What is it about?
Holograms recreate 3D scenes by controlling how light waves interfere and bend — not just their brightness, but also their timing (phase). Today's methods compute a hologram for one specific viewpoint; move the camera, and the entire calculation must restart from scratch. We propose a new way to represent 3D scenes using building blocks called "complex-valued 3D Gaussians" that store both brightness and phase as built-in properties tied to the scene's geometry. Because these properties travel with the scene rather than being locked to a single camera position, our method can produce holograms for new viewpoints instantly — 30 to 10,000 times faster than previous approaches — while still showing realistic depth-of-field blur and maintaining comparable image quality.
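To make the core idea concrete, here is a toy numpy sketch (my own illustration, not the paper's code): each Gaussian primitive carries a complex value, amplitude times e^(i·phase), so contributions from different primitives can interfere when summed, which a purely intensity-based splat cannot represent. All names and parameters here are invented for illustration.

```python
import numpy as np

# Toy sketch: 2D Gaussians that each carry a complex value a*exp(i*phi),
# i.e. both amplitude AND phase, instead of only an intensity/color.
H = W = 64
ys, xs = np.mgrid[0:H, 0:W]

# Hypothetical per-Gaussian parameters: center, spread, amplitude, phase.
gaussians = [
    dict(cx=20.0, cy=24.0, sigma=5.0, amp=1.0, phase=0.3),
    dict(cx=44.0, cy=40.0, sigma=7.0, amp=0.8, phase=2.1),
]

# Coherently accumulate the complex wavefield: because phases are kept,
# overlapping Gaussians can reinforce or cancel (interference).
field = np.zeros((H, W), dtype=np.complex128)
for g in gaussians:
    falloff = np.exp(-((xs - g["cx"])**2 + (ys - g["cy"])**2)
                     / (2 * g["sigma"]**2))
    field += g["amp"] * np.exp(1j * g["phase"]) * falloff

# The observable image is the squared magnitude of the field.
intensity = np.abs(field)**2
```

Because amplitude and phase live on the primitives rather than on a camera, rendering a new viewpoint only re-accumulates the same complex values, which is the intuition behind the speedup described above.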
Featured image: photo by Rohit Choudhari on Unsplash
Why is it important?
Holographic displays are considered a next-generation 3D screen technology, but their adoption is held back by the enormous cost of recomputing holograms every time the viewer moves. Our work removes this bottleneck by embedding wave-optics properties directly into the 3D scene representation itself, rather than treating hologram generation as a separate post-processing step. This is timely because 3D Gaussian Splatting has recently become the dominant real-time scene representation in computer graphics, yet it only models light intensity. By extending it to handle amplitude and phase, our method bridges the gap between mainstream neural rendering and physically accurate holography, making real-time, view-consistent holographic rendering practical for the first time.
Perspectives
This project started from a simple question: can we make holograms that "know" the 3D scene they represent, the same way modern radiance fields do? Existing hologram methods treat each viewpoint as an isolated problem, discarding geometric understanding between views. By rethinking Gaussian primitives as complex-valued wave emitters rather than colored blobs, we found that amplitude and phase can be learned as intrinsic scene properties — no per-view recalculation needed. The most exciting outcome for me is that the resulting holograms produce natural defocus blur that closely matches physics-based optimization, suggesting the representation captures genuine wave-optics behavior. I see this as a first step toward a future where holographic displays are as easy to drive as conventional screens.
Yicheng Zhan
University College London
Read the Original
This page is a summary of: Complex-Valued Holographic Radiance Fields, ACM Transactions on Graphics, March 2026, ACM (Association for Computing Machinery). DOI: 10.1145/3804450.