Nvidia Taught an AI to Instantly Generate Fully-Textured 3D Models From Flat 2D Images


Turning a sketch or photo of an object into a fully realized 3D model that can be duplicated on a 3D printer, used in a video game, or brought to life in a movie through visual effects normally requires the skills of a digital modeler working from a stack of images. But Nvidia has successfully trained a neural network to generate fully-textured 3D models from just a single photo.

We've seen similar approaches to automatically generating 3D models before, but they've either required a series of photos snapped from many different angles for accurate results, or input from a human user to help the software figure out the dimensions and shape of a specific object in an image. Neither is a wrong approach to the problem; any improvement to 3D modeling tools is welcome, since it puts them in the hands of a wider audience, even those lacking advanced skills. But those requirements also limit the potential uses for such software.

At the annual Conference on Neural Information Processing Systems, taking place in Vancouver, British Columbia, this week, researchers from Nvidia will present a new paper, "Learning to Predict 3D Objects with an Interpolation-Based Renderer," that details the creation of a new graphics tool called a differentiable interpolation-based renderer, or DIB-R for short, which sounds only slightly less intimidating.
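The article doesn't describe how DIB-R works internally, but the core idea behind any differentiable renderer can be sketched in a toy setting: render a shape's pixels as a smooth (interpolated) function of the shape's parameters, so that a pixel-space loss against a target image can be pushed back through the renderer to update the shape. The 1D example below is purely illustrative (the sigmoid "rendering", the numbers, and the use of numerical gradients are all assumptions for the sketch, not details from the paper):

```python
import numpy as np

def soft_render(left, right, xs, sharpness=20.0):
    # Toy differentiable "renderer": pixel intensity is a smooth,
    # sigmoid-based interpolation of coverage by the interval
    # [left, right], so intensity varies continuously with the
    # shape parameters instead of jumping 0/1 at the edges.
    return (1.0 / (1.0 + np.exp(-sharpness * (xs - left)))
            * 1.0 / (1.0 + np.exp(-sharpness * (right - xs))))

xs = np.linspace(0.0, 1.0, 64)          # 64-pixel "image"
target = soft_render(0.3, 0.7, xs)      # the "photo" we want to match

# Recover the shape by gradient descent on a pixel-space loss.
# Central finite differences stand in for autodiff, for brevity.
params = np.array([0.1, 0.9])           # initial guess: [left, right]
lr, eps = 0.05, 1e-5
for _ in range(500):
    grads = np.zeros(2)
    for i in range(2):
        hi, lo = params.copy(), params.copy()
        hi[i] += eps
        lo[i] -= eps
        loss_hi = np.mean((soft_render(hi[0], hi[1], xs) - target) ** 2)
        loss_lo = np.mean((soft_render(lo[0], lo[1], xs) - target) ** 2)
        grads[i] = (loss_hi - loss_lo) / (2.0 * eps)
    params -= lr * grads
# params converges toward the target shape (0.3, 0.7)
```

Because the rendering is smooth everywhere, the loss gradient tells the optimizer which way to move each shape parameter; a hard 0/1 rasterizer would give zero gradient almost everywhere and nothing to learn from. DIB-R applies this principle to full 3D meshes with texture and lighting, which is what lets a neural network be trained end-to-end from 2D images.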
