Pixels, pixels, pixels …


By Achin Bhowmik

“How many pixels are really needed for immersive visual experiences with a virtual reality (VR) head-mounted display (HMD)?” This is one of the most popular questions that I got during and after the short course I taught at this year’s Display Week.

So I thought I would reflect on this a bit, and point to some recent developments and trends in the display industry as gleaned from the presentations and demonstrations at this year’s event.
First, let’s consider some basic back-of-the-envelope math. Here are a few facts about the human visual system. An ideal human eye has an angular resolution of about 1/60th of a degree in the central vision. Each eye has a horizontal field-of-view (FOV) of ~160° and a vertical FOV of ~175°. The two eyes work together for stereoscopic depth perception over a FOV ~120° wide and ~135° high.

Since the current manufacturing processes for both liquid-crystal displays (LCDs) and organic light-emitting diode (OLED) displays produce a uniform pixel density across the entire surface of the spatial light modulator, matching that acuity across the full field means (160 × 60) × (175 × 60) pixels, a whopping ~100 megapixels for each eye, and (120 × 60) × (135 × 60), or ~60 megapixels, over the stereo overlap.
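
That arithmetic is simple enough to put into a small script (a minimal sketch; the 60 pixels-per-degree figure just restates the 1/60° acuity above):

```python
# Back-of-the-envelope pixel budget for a "retina-matching" VR display.
# Assumes the idealized figures from the text: 1/60 deg angular
# resolution (i.e. 60 px/deg), held uniform across the whole field.

PIXELS_PER_DEGREE = 60  # 1/60 deg acuity -> 60 px/deg

def pixel_count(h_fov_deg, v_fov_deg, ppd=PIXELS_PER_DEGREE):
    """Pixels needed to cover a field of view at uniform density."""
    return (h_fov_deg * ppd) * (v_fov_deg * ppd)

per_eye = pixel_count(160, 175)  # monocular FOV
stereo = pixel_count(120, 135)   # binocular overlap

print(f"Per eye: {per_eye / 1e6:.1f} MP")        # ~100.8 MP
print(f"Stereo overlap: {stereo / 1e6:.1f} MP")  # ~58.3 MP
```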

While this would provide perfect 20/20 visual acuity, packing such a high number of pixels into the small screens of a VR HMD is obviously not feasible with current technologies. To put this into context, the two displays in the HTC Vive HMD contain a total of 2.6 megapixels, resulting in quite visible pixelation artifacts. Most people in the short course raised their hands when asked whether the pixel densities of current VR HMDs were unacceptable (I suspect the rest were just too lazy to raise their hands!).
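
The gap is easy to quantify. A rough estimate, assuming the commonly quoted Vive specifications of 1080 × 1200 pixels per eye and a per-eye horizontal FOV of roughly 100° (an assumption; quoted values vary with the headset fit):

```python
# Rough pixels-per-degree for a first-generation VR HMD.
# Assumes commonly quoted HTC Vive figures: 1080 x 1200 pixels per eye
# over a horizontal FOV of roughly 100 degrees (approximate; quoted
# values vary between ~100 and ~110 degrees).

h_pixels, v_pixels = 1080, 1200
h_fov_deg = 100   # assumption, see comment above
ideal_ppd = 60    # 20/20 target from the calculation above

total_mp = 2 * h_pixels * v_pixels / 1e6
ppd = h_pixels / h_fov_deg
print(f"Total: {total_mp:.1f} MP")                 # ~2.6 MP
print(f"~{ppd:.0f} px/deg vs {ideal_ppd} px/deg")  # ~11 vs 60
```

At roughly one-sixth of the 60 px/deg acuity limit, individual pixels remain plainly visible, consistent with the pixelation noted above.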

Even if it were possible to make VR displays with 60–100 million pixels, other system-level constraints would make them impractical. One is the graphics and computation horsepower needed to render scenes with enough polygons to match such high pixel densities. Another is bandwidth: current interfaces cannot transport such enormous amounts of data between the compute engines, memory devices, and display screens while also meeting the stringent latency requirements of VR.
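
To see why, consider a hypothetical data-rate estimate (the 24-bit color depth and 90 Hz refresh rate are my assumptions, chosen as typical VR targets, not figures from the course):

```python
# Hypothetical raw link bandwidth for a 100-megapixel-per-eye display.
# Color depth and refresh rate are assumed typical VR values.

pixels_per_eye = 100e6
num_eyes = 2
bits_per_pixel = 24  # assumed 24-bit RGB
refresh_hz = 90      # assumed VR refresh target

gbits_per_s = pixels_per_eye * num_eyes * bits_per_pixel * refresh_hz / 1e9
print(f"Uncompressed: ~{gbits_per_s:.0f} Gbit/s")  # ~432 Gbit/s
```

That is more than an order of magnitude beyond what a contemporary display interface such as DisplayPort 1.4 (roughly 32 Gbit/s raw) can carry.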

So… is this the dead end? The answer is a resounding “no”! Challenges such as these are what innovators and engineers live for! Let’s look at biology for some clues. How does the human visual system address this dilemma? It turns out that high human visual acuity is limited to a very small visual field… about +/-1° around the optical axis of the eye, centered on the fovea. So, if we could track the user’s eye gaze in real time, we could render a high level of detail in a small area around the gaze direction and let it fall off rapidly away from that point. Graphics engineers already have a term for such techniques, which are under active exploration: “foveated” or “foveal” rendering. This would drastically reduce the graphics workload and the associated power consumption, as the toy estimate below illustrates.
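
Here is a two-zone sketch of the idea (my own simplification; real foveated renderers use a smooth falloff rather than two discrete zones, and the peripheral density and window size below are illustrative assumptions):

```python
# Toy estimate of foveated-rendering savings. Two discrete zones:
# full acuity inside a gaze-centered window, reduced density outside.
# PERIPH_PPD and FOVEA_DEG are illustrative assumptions.

FULL_PPD = 60    # foveal target: 1/60 deg acuity
PERIPH_PPD = 10  # assumed peripheral density (illustrative)
FOVEA_DEG = 5    # high-res window, wider than the +/-1 deg fovea
                 # to leave margin for eye-tracking latency

def foveated_pixels(h_fov_deg, v_fov_deg):
    """Pixels shaded per eye under the two-zone model."""
    fovea = (FOVEA_DEG * FULL_PPD) ** 2
    periphery = (h_fov_deg * PERIPH_PPD) * (v_fov_deg * PERIPH_PPD)
    return fovea + periphery

naive = (160 * FULL_PPD) * (175 * FULL_PPD)
foveated = foveated_pixels(160, 175)
print(f"Naive: {naive / 1e6:.0f} MP per eye")        # ~101 MP
print(f"Foveated: {foveated / 1e6:.1f} MP per eye")  # ~2.9 MP
print(f"Savings: ~{naive / foveated:.0f}x")          # ~35x
```
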
Clearly, we are still in the early days of VR, with many challenges remaining to be solved, including presenting adequate visual acuity on the displays. So… this is an exciting time for the display industry and its engineers, reminiscent of the onset of the display revolutions that brought us HDTVs and smartphones.


Looking forward to the next Display Week!
