NVIDIA proposes a way of teaching robots depth perception and how to turn 2D images into 3D models - 3D Printing Industry
A machine learning method has proven capable of turning 2D images into 3D models. Created by researchers at GPU manufacturer NVIDIA, the framework shows that it is possible to infer shape, texture, and light from a single image, in a way similar to how the human visual system works.

"Close your left eye as you look at this screen. Now close your right eye and open your left," writes NVIDIA PR specialist Lauren Finkle on the company blog. "You'll notice that your field of vision shifts depending on which eye you're using. That's because while we see in two dimensions, the images captured by your retinas are combined to provide depth and produce a sense of three-dimensionality."

Termed a differentiable interpolation-based renderer, or DIB-R, the NVIDIA rendering framework has the potential to aid and accelerate various areas of 3D design and robotics, producing 3D models in a matter of seconds.
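The word "differentiable" is the key idea: because the renderer is a smooth function, gradients can flow from a rendered 2D image back to the 3D parameters that produced it, so a shape can be recovered by optimization. The toy NumPy sketch below is not NVIDIA's DIB-R (which soft-rasterizes full meshes with texture and lighting); it is a minimal illustration of the same principle, with all names and parameters being assumptions made for the example. It renders a soft-edged disc whose pixel coverage is a differentiable function of a radius parameter, then recovers an unknown radius from a single target image by gradient descent through the renderer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def render(radius, size=32, tau=1.0):
    # "Soft rasterization" of a disc: each pixel gets a smooth
    # coverage value instead of a hard 0/1 hit test, which makes
    # the image differentiable with respect to the radius.
    ys, xs = np.mgrid[0:size, 0:size]
    dist = np.sqrt((xs - size / 2) ** 2 + (ys - size / 2) ** 2)
    return sigmoid((radius - dist) / tau)

def grad_radius(radius, target, size=32, tau=1.0):
    # Analytic gradient of the mean-squared-error loss with
    # respect to the radius, via the chain rule through the
    # soft renderer (sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))).
    img = render(radius, size, tau)
    d_img_d_r = img * (1.0 - img) / tau
    d_loss_d_img = 2.0 * (img - target) / img.size
    return np.sum(d_loss_d_img * d_img_d_r)

# Recover an unknown radius from a single rendered image:
# start from a wrong guess and descend the image-space loss.
target = render(10.0)
r = 3.0
for _ in range(500):
    r -= 50.0 * grad_radius(r, target)
```

After the loop, `r` has converged close to the true radius of 10. DIB-R applies the same analysis-by-synthesis loop at far greater scale, optimizing mesh vertices, texture, and lighting instead of a single scalar.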
Dec-11-2019, 13:28:54 GMT