
SPIE Fireside Chat: Computer-Generated Holography and Next-Generation AR Display

December 18, 2020
4 min read

According to recent research from IDTechEx, innovations in optics and micro-displays are set to play a huge part in the future of augmented reality (AR), a market predicted to exceed $28bn by 2030. During the December SPIE AR, VR, MR Fireside Chat, VividQ CEO Darran Milne joined Bernard Kress, Partner Optical Architect at Microsoft, and Christophe Peroz, formerly of Magic Leap, to discuss why Computer-Generated Holography is a key element in achieving mass consumer adoption of AR.

Holographic display is the process of directly engineering light to project three-dimensional virtual objects and scenes that possess a natural depth of field. As far as AR displays go, this is a real game-changer. Computer-generated holography (CGH) provides the immersive AR experience we have always wanted. Unlike traditional stereoscopic AR, holographic content can be displayed as close as arm's length and made fully interactive. In the Fireside Chat, Darran explained why CGH is critical for the future of AR.

What do we want from an AR display?

There are several criteria that an AR display has to meet to enable mass consumer adoption. These include:

  1. Comfortable Field of View (FOV): The required FOV depends on the type of AR experience. For simple text or icons, a 30° field of view is sufficient. For fully immersive scenes, a larger FOV covering peripheral vision is required.
  2. Low power use: Power consumption is important because, through battery life and heat dissipation, it influences many other features of AR devices, such as form factor.
  3. High image quality: For the best AR experience, digital images need to have high contrast and must not suffer from excessive transparency or speckle from light sources such as lasers.
  4. High and adjustable brightness: For outdoor environments, AR experiences require dynamically controlled brightness settings to remain visible.
  5. Small form factor: A small, lightweight device is especially important for comfortable all-day use without eye fatigue.

One other feature of AR displays that should not be ignored is realistic depth of field. In complex AR experiences where virtual images need to appear in the context of the real environment, realistic depth of field is required for the scene to match up and look convincing. Currently available AR/VR devices rely on stereoscopic display, which confines virtual objects to a single depth plane. This causes many issues for our visual systems and produces an effect known as vergence-accommodation conflict (VAC). Put simply, AR/VR devices today present each eye with a slightly offset image, which forces the viewer's eyes to converge at a specific point in the distance. However, because these devices have no intrinsic depth of field, the eyes must always focus on a single, fixed display plane. The viewer's senses fall out of balance: one sense reports that the eyes are converging far out in the distance, while the other reports that they are focusing on the near display plane. This causes eye strain and leads to nausea and headaches in at least 20% of the population. If AR is aiming for mass consumer adoption, a device must be comfortable to wear for 8+ hours a day.
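
To make the mismatch concrete, VAC can be expressed as the difference, in dioptres, between the distance the eyes converge to and the fixed focal plane they must focus on. Below is a minimal sketch of that arithmetic; the 2 m focal plane and the 0.5 D comfort threshold are illustrative assumptions commonly cited in the literature, not figures from the talk.

```python
# Minimal sketch (illustrative assumptions, not figures from the talk):
# quantify vergence-accommodation conflict as the dioptre difference
# between where the eyes converge and where they must focus.

def vac_mismatch_dioptres(vergence_distance_m: float,
                          focal_plane_distance_m: float) -> float:
    """Difference between vergence and accommodation demand, in dioptres."""
    return abs(1.0 / vergence_distance_m - 1.0 / focal_plane_distance_m)

if __name__ == "__main__":
    focal_plane = 2.0  # assumed fixed focal plane of a stereoscopic headset, metres
    for virtual_object in (0.4, 1.0, 2.0, 6.0):  # where the content appears to sit
        mismatch = vac_mismatch_dioptres(virtual_object, focal_plane)
        comfortable = "OK" if mismatch <= 0.5 else "likely uncomfortable"
        print(f"object at {virtual_object:>4.1f} m -> {mismatch:.2f} D ({comfortable})")
```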

How do we get to a real 3D display?

Most AR or 3D displays today use stereoscopic effects to produce ‘3D-like’ images. In autostereo displays (most commonly known from the Nintendo 3DS, and applied recently by Sony in their Spatial Reality Display), different images are presented to the left and right eye, creating a 3D-like illusion. There is no real depth in the image, and the user cannot focus on different parts of it. Recently, new ways of producing real three-dimensional scenes in an AR environment have emerged, allowing virtual objects to defocus at the same rate as the real environment. For example, light field displays present multiple flat views of a scene to achieve that. These displays, however, suffer from an intrinsic trade-off between field of view, depth of field and image resolution. The resolution of the display is divided up in order to create multiple views, leading to low resolution and discontinuous jumps from view to view. For light field display technology to be successful, new hardware developments are required. Companies in the field are relying on 8K, 16K or even higher resolution displays with very small pixels, which introduces potential bandwidth and compute issues.
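
The trade-off can be illustrated with simple arithmetic: a fixed panel is split among the views, so each view keeps only a fraction of the pixels. The sketch below assumes a near-square grid of views and illustrative panel resolutions; it is not a model of any specific light field product.

```python
import math

# Illustrative sketch of the light-field trade-off described above: a
# fixed panel is divided among N views, so per-view resolution falls as
# the view count grows. Panel sizes and view counts are assumptions.

def per_view_pixels(panel_w: int, panel_h: int, views: int):
    """Split the panel into a near-square grid of views and return the
    pixel resolution left over for each individual view."""
    cols = math.ceil(math.sqrt(views))
    rows = math.ceil(views / cols)
    return panel_w // cols, panel_h // rows

if __name__ == "__main__":
    for panel_w, panel_h in ((3840, 2160), (7680, 4320), (15360, 8640)):  # 4K, 8K, 16K
        for views in (9, 25, 64):
            w, h = per_view_pixels(panel_w, panel_h, views)
            print(f"{panel_w}x{panel_h} panel, {views:>2} views -> {w}x{h} per view")
```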

Holographic display: the next generation of AR display technology

A third display type, based on computer-generated holography (CGH), contains all the depth information necessary to create a fully realistic, three-dimensional AR display. Content presented on a holographic display can be placed anywhere from 10 cm in front of the viewer to optical infinity. Even small head movements change the view of a holographic scene thanks to parallax, just as we experience in the real world.

But what does a holographic display really mean? To answer this question, it helps to consider how we would go about constructing a perfect digital display, taking our inspiration from nature. When we look at a real object, we see light from a light source reflecting off the object’s surface and forming a complex wavefront that reaches our eye. In CGH, instead of trying to arrange multiple sub-images to offer multiple views of an object, we compute the exact wavefront that would reflect off the object and present this information to the eye. The wavefront contains the same depth and colour information as the reference object, so its holographic projection should be visually indistinguishable from the real thing. To achieve this, we use a light source such as a laser, a phase LCoS (liquid crystal on silicon) device to modulate the light, and a digital data source, such as a game engine, depth-sensing camera or CAD software.
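
As a rough illustration of what "computing the wavefront" means, the toy sketch below sums spherical waves from a few object points at the plane of a phase modulator. The wavelength, pixel pitch and point positions are illustrative assumptions, and this naive point-based sum is only a teaching example, not VividQ's method.

```python
import numpy as np

# Toy illustration (not VividQ's implementation): treat the object as a
# handful of point emitters and sum their spherical waves at the
# hologram plane to obtain the complex wavefront the eye would receive.

wavelength = 520e-9                      # assumed green laser, metres
k = 2 * np.pi / wavelength               # wavenumber
pitch = 8e-6                             # assumed SLM pixel pitch, metres
n = 512                                  # hologram is n x n pixels

# Coordinates of the hologram (SLM) plane, centred on the optical axis.
coords = (np.arange(n) - n / 2) * pitch
xx, yy = np.meshgrid(coords, coords)

# A few object points: (x, y, distance from the SLM, amplitude).
points = [(0.0, 0.0, 0.10, 1.0),
          (1e-3, -5e-4, 0.15, 0.8),
          (-8e-4, 6e-4, 0.25, 0.6)]

# Superpose a spherical wavefront from every point (point-based CGH).
field = np.zeros((n, n), dtype=complex)
for px, py, pz, amp in points:
    r = np.sqrt((xx - px) ** 2 + (yy - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r) / r

# A phase-only device such as LCoS keeps only the argument of the field.
phase_hologram = np.angle(field)
print(phase_hologram.shape, phase_hologram.min(), phase_hologram.max())
```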

What does the ideal holographic display look like?

To create high-quality holographic displays, we need to be able to compute complex wave patterns in real time. This is what VividQ’s Software Development Kit (SDK) is designed for. The output of our SDK, an interference pattern, is presented on an LCoS micro-display, which adjusts the phase of the incoming light waves per pixel so that the reflected light interferes to create a holographic image. An interference pattern can be thought of as a set of instructions presented to a micro-display, telling the light how to behave. This is what we call a ‘hologram’. The reflected light forms the ‘holographic image’.

In CGH projections, viewers are not looking directly at a two-dimensional display, as in traditional AR solutions. Instead, the micro-display acts as a means of engineering the wavefront into a three-dimensional holographic projection, an output that can appear anywhere, either in front of or behind the display.

VividQ’s proprietary SDK provides solutions for the mass adoption of CGH. It enables real-time computation of holograms on mobile GPUs and processors, and achieves high-quality projection on off-the-shelf micro-displays. This makes it possible to scale the resolution of micro-displays used for CGH from 720p to 1080p with the same efficiency and image quality. VividQ enables this shift by moving away from ray-traced, point-based calculation to a highly optimised Fast Fourier Transform (FFT) method. In this way, CGH becomes a scalable AR display technology. You can read more about how the VividQ SDK achieves superior performance on mobile GPUs on the Arm blog here.
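
For a sense of why an FFT formulation scales better, the sketch below propagates a single depth layer to the modulator plane using the textbook angular spectrum method, whose cost depends on pixel count rather than on the number of object points. It is a generic illustration under assumed parameters, not the algorithm inside the VividQ SDK.

```python
import numpy as np

# Generic FFT-based propagation (textbook angular spectrum method),
# shown only to illustrate the scaling argument above. This is not the
# VividQ SDK's algorithm; all parameters are illustrative assumptions.

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` metres using two FFTs."""
    n_y, n_x = field.shape
    fx = np.fft.fftfreq(n_x, d=pitch)        # spatial frequencies, 1/m
    fy = np.fft.fftfreq(n_y, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)

    # Free-space transfer function; evanescent components are removed.
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)

    return np.fft.ifft2(np.fft.fft2(field) * transfer)

if __name__ == "__main__":
    n, pitch, wavelength = 1024, 8e-6, 520e-9
    # An arbitrary single-depth content layer (here, a bright square).
    layer = np.zeros((n, n), dtype=complex)
    layer[n // 2 - 32:n // 2 + 32, n // 2 - 32:n // 2 + 32] = 1.0

    field_at_slm = angular_spectrum_propagate(layer, wavelength, pitch, 0.15)
    phase_hologram = np.angle(field_at_slm)   # what a phase LCoS would display
    print(phase_hologram.shape)
```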

What are the other benefits of holographic display?

CGH is known as the pinnacle of display technologies, offering many other advantages in AR systems beyond realistic depth of field:

  • Easy aberration correction: In holographic displays, specific optical elements can be calculated as part of the hologram itself. For example, aberrations from misaligned or cheaper lenses used in the AR display system can be corrected in software, keeping the resulting projection high quality regardless.
  • High Brightness: CGH displays using phase LCoS devices are extremely power efficient. By directing all laser light to create an image, holographic displays can achieve high brightness in sparse scenes, with very low power input.
  • Dynamic Layer Allocation: Highly detailed holographic scenes can be composed of tens of individual focal planes. The majority of AR applications, however, require no more than 8 depth planes to be presented to the viewer. By allocating these planes dynamically on a per-frame basis in CGH, the overall overhead of AR display systems can be reduced significantly; a minimal allocation sketch follows this list.
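
As a hedged illustration of the per-frame allocation idea, the sketch below picks at most eight focal planes per frame from the depths actually present in a depth map, concentrating planes where the content is. The dioptre-space quantile heuristic and the fake depth map are assumptions for illustration, not VividQ's allocation scheme; only the plane budget of 8 comes from the article.

```python
import numpy as np

# Illustrative per-frame depth-plane allocation: choose at most `budget`
# focal planes where the frame's content actually sits, rather than a
# fixed plane stack. The heuristic is an assumption, not VividQ's method.

def allocate_depth_planes(depth_map_m: np.ndarray, budget: int = 8) -> np.ndarray:
    """Return up to `budget` plane depths (metres) covering this frame."""
    dioptres = 1.0 / np.clip(depth_map_m, 0.1, 100.0).ravel()
    unique = np.unique(np.round(dioptres, 2))        # distinct depths present
    if unique.size <= budget:
        return np.sort(1.0 / unique)
    # Otherwise place the budgeted planes at evenly spaced quantiles of the
    # observed depths, so planes concentrate where the content is densest.
    quantiles = np.linspace(0.0, 1.0, budget)
    planes_dioptre = np.quantile(dioptres, quantiles)
    return np.sort(1.0 / planes_dioptre)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake depth map: most content near 0.5 m, some background around 3 m.
    depths = np.where(rng.random((240, 320)) < 0.8,
                      rng.normal(0.5, 0.05, (240, 320)),
                      rng.normal(3.0, 0.3, (240, 320)))
    print(allocate_depth_planes(depths))
```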

To watch the SPIE Fireside Chat featuring Darran Milne, click here.

To learn more about applications of CGH, you can download VividQ's latest whitepaper 'Holography: The Future of Augmented Reality Wearables' here.