Holograms on the Horizon?

Communications of the ACM

Researchers at the Massachusetts Institute of Technology (MIT) have used machine learning to reduce the processing power needed to render convincing holographic images, making it possible to generate them in near-real time on consumer-level computer hardware. Such a method could pave the way to portable virtual-reality systems that use holography instead of stereoscopic displays. Stereo imagery can present the illusion of three-dimensionality, but users often complain of dizziness and fatigue after long periods of use because there is a mismatch between where the brain expects to focus and the flat focal plane of the two images. Switching to holographic image generation overcomes this problem; it uses the interference patterns of many light beams to construct visible shapes in free space that present the brain with images it can more readily accept as three-dimensional (3D) objects. "Holography in its extreme version produces a full optical reproduction of the image of the object. There should be no difference between the image of the object and the object itself," says Tim Wilkinson, a professor of electrical engineering at Jesus College at the U.K.'s University of Cambridge.
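
To make the interference idea concrete, here is a minimal numpy sketch (not MIT's learned method; the wavelength, pixel pitch, grid size, and point positions are hypothetical) of how a computer-generated hologram can be formed: sum the spherical wavefronts that a few 3D point sources would cast onto a flat display plane, then keep the phase of the total field.

```python
import numpy as np

# Minimal sketch (not MIT's method): compute the interference pattern that a few
# 3D point sources would produce on a flat hologram plane. Summing the complex
# wavefronts and keeping the phase is the core idea behind computer-generated
# holography; real systems add colour, occlusion, and display-specific encoding.

wavelength = 520e-9              # green light, metres (hypothetical value)
k = 2 * np.pi / wavelength       # wavenumber

# Hologram plane: a 512x512 grid with 8 um pixel pitch (hypothetical values)
pitch = 8e-6
n = 512
xs = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# A toy "scene": three point sources at different depths (metres)
points = [(0.0, 0.0, 0.10), (1e-3, -1e-3, 0.12), (-1e-3, 1e-3, 0.15)]

field = np.zeros((n, n), dtype=np.complex128)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)  # distance to each pixel
    field += np.exp(1j * k * r) / r                        # spherical wave contribution

phase_hologram = np.angle(field)      # what a phase-only display would show
intensity = np.abs(field) ** 2        # the interference fringes themselves
print(phase_hologram.shape, intensity.max())
```

A phase-only display lit by coherent light would diffract such a pattern back into light converging at the original point positions; the MIT result is about using machine learning to make this kind of computation, for full scenes, fast enough for consumer hardware.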


NeRF Research Turns 2D Photos Into 3D Scenes

#artificialintelligence

When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds. Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The NVIDIA Research team has developed an approach that accomplishes this task almost instantly -- making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering. NVIDIA applied this approach to a popular new technology called neural radiance fields, or NeRF.
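
As a rough illustration of what a neural radiance field does (a sketch only, not NVIDIA's implementation): a learned function maps a 3D point to a colour and a density, and a pixel is rendered by compositing samples along the camera ray through it. The toy field below stands in for the trained network; all names and values are illustrative.

```python
import numpy as np

# Minimal sketch of NeRF-style volume rendering (illustrative, not NVIDIA's code).
# A radiance field maps a 3D point to (colour, density); a pixel's colour is the
# alpha-composited sum of samples along the camera ray through that pixel.
# In a real NeRF the field is a trained neural network; here it is a toy stand-in.

def toy_radiance_field(points):
    """Stand-in for the trained MLP: a fuzzy red sphere of radius 0.5 at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    density = 20.0 * np.exp(-(dist / 0.5) ** 2)          # sigma: high inside the sphere
    colour = np.tile([1.0, 0.2, 0.2], (len(points), 1))  # constant red
    return colour, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    ts = np.linspace(near, far, n_samples)
    points = origin + ts[:, None] * direction            # sample points along the ray
    colour, sigma = toy_radiance_field(points)
    delta = np.diff(ts, append=far)                      # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                 # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * colour).sum(axis=0)       # composited pixel colour

pixel = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)   # mostly red: the ray passes through the sphere
```

Training amounts to adjusting the field so that pixels rendered this way match the input photographs taken from known camera poses; the speed of that training-and-rendering loop is what the NVIDIA approach targets.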


Nvidia's new AI magic turns 2D photos into 3D graphics

#artificialintelligence

Nvidia has made another attempt to add depth to shallow graphics. After converting 2D images into 3D scenes, models, and videos, the company has turned its focus to editing. The GPU giant today unveiled a new AI method that transforms still photos into 3D objects that creators can modify with ease. Dubbed 3D MoMa, the technique could give game studios a simple way to alter images and scenes. Creating such editable 3D assets typically relies on time-consuming photogrammetry, which takes measurements from photos. 3D MoMa instead uses AI to estimate a scene's physical attributes -- from geometry to lighting -- by analyzing still images.
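
In spirit, that estimation is an optimization loop: render with a current guess of the scene's attributes, compare against the photos, and adjust the guess to reduce the error. The sketch below illustrates the idea on a deliberately tiny problem (a single Lambertian surface with unknown albedo and light intensity, with hand-derived gradients); it is not the 3D MoMa pipeline, which recovers a full scene's geometry, materials, and lighting from real images.

```python
import numpy as np

# Minimal sketch of the inverse-rendering idea (illustrative only, not NVIDIA's
# pipeline): guess scene attributes, render, compare against the observations,
# and nudge the attributes to reduce the error. Here the "scene" is one
# Lambertian surface with unknown albedo and light intensity, and the
# "photos" are its observed brightness under known light directions.

rng = np.random.default_rng(0)
normal = np.array([0.0, 0.0, 1.0])
light_dirs = rng.normal(size=(32, 3))
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
cos = np.clip(light_dirs @ normal, 0.0, None)             # n.l shading term

true_albedo, true_light = 0.7, 2.0
observed = true_albedo * true_light * cos                  # the "captured images"

albedo, light = 0.1, 1.0                                   # initial guesses
lr = 0.05
for step in range(500):
    pred = albedo * light * cos                            # differentiable render
    err = pred - observed
    # analytic gradients of the mean squared error w.r.t. the two parameters
    g_albedo = 2 * np.mean(err * light * cos)
    g_light = 2 * np.mean(err * albedo * cos)
    albedo -= lr * g_albedo
    light -= lr * g_light

print(albedo * light, true_albedo * true_light)            # products should match
```

Only the albedo-light product is identifiable from this toy data, which is why the check compares products; real inverse renderers face similar ambiguities and resolve them with more views, richer shading models, and priors.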


Nvidia's latest graphics research takes images from 2D to 3D almost instantly

ZDNet

The metaverse is taking shape right now, and Nvidia has gone all in, introducing a robust set of tools to build it. But even for a graphics pioneer like Nvidia, rendering 3D worlds is a complicated technical challenge. At its spring GPU Technology Conference (GTC) this week, Nvidia demonstrated a new approach to inverse rendering -- the process of reconstructing 3D scenes from a handful of 2D images. Inverse rendering uses AI to approximate how light behaves in the real world. With the approach developed by the Nvidia Research team, the whole process happens almost instantly.


Using AI to create better virtual reality experiences

#artificialintelligence

Virtual and augmented reality headsets are designed to place wearers directly into other environments, worlds, and experiences. While the technology is already popular among consumers for its immersive quality, there could be a future in which holographic displays look even more like real life. In pursuit of these better displays, the Stanford Computational Imaging Lab has combined its expertise in optics and artificial intelligence. Its most recent advances in this area are detailed in a paper published Nov. 12 in Science Advances and in work to be presented at SIGGRAPH ASIA 2021 in December. At its core, this research confronts the fact that current augmented and virtual reality displays show only 2D images to each of the viewer's eyes, instead of the 3D -- or holographic -- images we see in the real world.
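
A classical baseline helps show what such holographic displays have to compute. The sketch below is a standard Gerchberg-Saxton phase-retrieval loop (not the Stanford method), which finds a phase-only pattern whose reconstruction matches a target image under an idealized propagation model; improving on such idealized models is where the lab's combination of optics and AI comes in.

```python
import numpy as np

# Minimal sketch: compute a phase-only hologram for a target image with the
# classical Gerchberg-Saxton algorithm (illustrative; the idealized FFT
# propagation here ignores the imperfections of real displays). The loop
# alternates between the display plane and the image plane, keeping the
# target amplitude in one and the computed phase in the other.

def gerchberg_saxton(target_amplitude, iterations=50):
    phase = np.random.default_rng(0).uniform(0, 2 * np.pi, target_amplitude.shape)
    field = np.exp(1j * phase)                      # unit-amplitude display field
    for _ in range(iterations):
        image = np.fft.fft2(field)                  # propagate to the image plane
        image = target_amplitude * np.exp(1j * np.angle(image))  # impose target amplitude
        field = np.fft.ifft2(image)                 # propagate back
        field = np.exp(1j * np.angle(field))        # phase-only constraint at the display
    return np.angle(field)                          # phase pattern to show on the SLM

# Toy target: a bright square on a dark background
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
slm_phase = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * slm_phase)))
print(slm_phase.shape, reconstruction.max())
```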