Characterizing Satellite Geometry via Accelerated 3D Gaussian Splatting
Nguyen, Van Minh, Sandidge, Emma, Mahendrakar, Trupti, White, Ryan T.
The accelerating deployment of spacecraft in orbit has generated interest in on-orbit servicing (OOS), inspection of spacecraft, and active debris removal (ADR). Such missions require precise rendezvous and proximity operations in the vicinity of non-cooperative, possibly unknown, resident space objects. Safety concerns with manned missions and lag times with ground-based control necessitate complete autonomy. This requires robust characterization of the target's geometry. In this article, we present an approach for mapping geometries of satellites on orbit based on 3D Gaussian Splatting that can run on computing resources available on current spaceflight hardware. We demonstrate model training and 3D rendering performance on a hardware-in-the-loop satellite mock-up under several realistic lighting and motion conditions. Our model is shown to be capable of training on-board and rendering higher-quality novel views of an unknown satellite nearly two orders of magnitude faster than previous NeRF-based algorithms. Such on-board capabilities are critical to enabling downstream machine intelligence tasks necessary for autonomous guidance, navigation, and control.
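The core render step of 3D Gaussian Splatting can be sketched numerically. The following is a toy illustration under assumed data structures (the `depth`, `opacity`, `mean_2d`, and `cov_2d` fields are my own naming, not the paper's): each Gaussian's projected 2D footprint is evaluated at a pixel, then the Gaussians are alpha-blended front to back.

```python
import numpy as np

def splat_weight(pixel_xy, mean_2d, cov_2d):
    """Density of a projected 2D Gaussian footprint at a pixel."""
    d = pixel_xy - mean_2d
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov_2d) @ d))

def blend(gaussians, pixel_xy):
    """Front-to-back alpha blending of depth-sorted Gaussians."""
    color = np.zeros(3)
    transmittance = 1.0
    for g in sorted(gaussians, key=lambda g: g["depth"]):
        alpha = g["opacity"] * splat_weight(pixel_xy, g["mean_2d"], g["cov_2d"])
        color += transmittance * alpha * g["color"]
        transmittance *= 1.0 - alpha
    return color

gaussians = [
    {"depth": 1.0, "opacity": 0.8, "mean_2d": np.array([0.0, 0.0]),
     "cov_2d": np.eye(2), "color": np.array([1.0, 0.0, 0.0])},
    {"depth": 2.0, "opacity": 0.9, "mean_2d": np.array([0.5, 0.0]),
     "cov_2d": np.eye(2), "color": np.array([0.0, 0.0, 1.0])},
]
# Mostly red: the nearer red Gaussian occludes most of the blue one.
print(blend(gaussians, np.array([0.0, 0.0])))
```

The speed of the real method comes from rasterizing these footprints in parallel on the GPU rather than marching rays through a volume, which is what makes it attractive for constrained flight hardware.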
Instant NeRF Wins SIGGRAPH Best Paper, Inspires Creators
Since its debut earlier this year, tens of thousands of developers around the world have downloaded the source code and used it to render spectacular scenes, sharing eye-catching results on social media. The research behind Instant NeRF is being honored as a best paper at SIGGRAPH -- which runs Aug. 8-11 in Vancouver and online -- for its contribution to the future of computer graphics research. One of just five papers selected for this award, it's among 17 papers and workshops with NVIDIA authors that are being presented at the conference, covering topics spanning neural rendering, 3D simulation, holography and more. NVIDIA recently held an Instant NeRF sweepstakes, asking developers to share 3D scenes created with the software for a chance to win a high-end NVIDIA GPU. Hundreds participated, posting 3D scenes of landmarks like Stonehenge, their backyards and even their pets.
NeRF Research Turns 2D Photos Into 3D Scenes
When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds. Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The NVIDIA Research team has developed an approach that accomplishes this task almost instantly -- making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering. NVIDIA applied this approach to a popular new technology called neural radiance fields, or NeRF.
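The inverse-rendering idea described above, approximating how light accumulates along a camera ray, can be illustrated with the volume-rendering quadrature that NeRF-style methods use. This is a minimal toy sketch in plain NumPy, not NVIDIA's implementation; the "field" here is a hand-built density blob rather than a trained neural network.

```python
import numpy as np

def composite(colors, densities, deltas):
    """NeRF-style quadrature: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors

# Toy field: a dense reddish blob mid-ray, empty space elsewhere.
t = np.linspace(0.0, 4.0, 64)
deltas = np.full_like(t, t[1] - t[0])
densities = np.where((t > 1.5) & (t < 2.5), 10.0, 0.0)
colors = np.tile([1.0, 0.2, 0.2], (t.size, 1))
pixel = composite(colors, densities, deltas)
print(pixel)  # approaches the blob color as the blob becomes opaque
```

Training inverts this: the predicted pixel is compared against the photograph, and the error is backpropagated into whatever representation produced the colors and densities.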
NVIDIA's Instant NeRF: transforming 2D images into 3D scenes in record time - Actu IA
Instant NeRF, a neural network-based technology capable of transforming a set of 2D photos into high-resolution 3D scenes in seconds, was introduced at an NVIDIA GTC session in March. According to the NVIDIA Research team, it is one of the first models of its kind to combine ultra-fast neural network training and fast rendering. In its press release, NVIDIA recalls the technological revolution Edwin Land brought about on February 21, 1947 by producing an instant photo with a Polaroid camera. NVIDIA Research pays tribute to him by recreating an iconic photo of Andy Warhol taking an instant photo, turning it into a 3D scene with Instant NeRF. Artificial intelligence researchers at NVIDIA took the opposite approach, aiming to transform a set of still images into a digital 3D scene in seconds.
Photos to 3D Scenes in Milliseconds
As if taking a picture weren't already a technological feat, we are now doing the opposite: modeling the world from pictures. I've covered amazing AI-based models that can take images and turn them into high-quality scenes. A challenging task is to take a few 2D images and reconstruct how the object or person would look in the real world. You can easily see how useful this technology is for many industries like video games, animated movies, or advertising. Take a few pictures and instantly have a realistic model to insert into your product.
This AI recreated a whole virtual San Francisco from 2.8 million photos
AI-generated imagery and 3D content have come a long way in a very short space of time. It was only two years ago that Google researchers revealed NeRF, or Neural Radiance Fields, and less than two weeks ago NVIDIA blew us away with almost real-time generation of 3D scenes from just a few dozen still photographs using their "Instant NeRF" technique. Well, now, a new paper has been released by the folks at Waymo describing "Block-NeRF", a technique for "scalable large scene neural view synthesis" – basically, generating really, really large environments. In the video, Károly Zsolnai-Fehér of Two Minute Papers explains how it all works. It's a very impressive achievement, and while it's massively ahead of where NeRF technology was just two years ago, it still isn't quite perfect.
Nvidia shows off AI model that turns a few dozen snapshots into a 3D-rendered scene
Nvidia's latest AI demo is pretty impressive: a tool that quickly turns a "few dozen" 2D snapshots into a 3D-rendered scene. In the video below you can see the method in action, with a model dressed like Andy Warhol holding an old-fashioned Polaroid camera. The tool is called Instant NeRF, referring to "neural radiance fields" -- a technique developed by researchers from UC Berkeley, Google Research, and UC San Diego in 2020. If you want a detailed explainer of neural radiance fields, you can read one here, but in short, the method maps the color and light intensity of different 2D shots, then generates data to connect these images from different vantage points and render a finished 3D scene. In addition to images, the system requires data about the position of the camera. Researchers have been improving this sort of 2D-to-3D model for a couple of years now, adding more detail to finished renders and increasing rendering speed.
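The camera-position data the article mentions is typically used to turn each pixel into a world-space ray before the field is queried. Below is a hedged sketch of that step under a standard pinhole-camera model; the function name and conventions (camera looking down the negative z-axis) are illustrative assumptions, not Instant NeRF's actual code.

```python
import numpy as np

def pixel_rays(height, width, focal, cam_to_world):
    """Return (origins, directions) arrays of shape (H, W, 3) for every pixel."""
    j, i = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    # Directions in camera space: pinhole model, camera looks down -z.
    dirs = np.stack([(i - width / 2) / focal,
                     -(j - height / 2) / focal,
                     -np.ones_like(i, dtype=float)], axis=-1)
    # Rotate directions into world space; all rays share the camera origin.
    rot, trans = cam_to_world[:3, :3], cam_to_world[:3, 3]
    world_dirs = dirs @ rot.T
    origins = np.broadcast_to(trans, world_dirs.shape)
    return origins, world_dirs

origins, dirs = pixel_rays(4, 4, focal=2.0, cam_to_world=np.eye(4))
print(dirs[2, 2])  # the central pixel's ray points straight down -z: [0, 0, -1]
```

In practice the poses come from the capture rig or from structure-from-motion tools run on the photos, which is why the system needs camera positions alongside the images themselves.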
NVIDIA's NeRF AI instantly turns 2D photos into 3D objects
A new technology called Neural Radiance Fields, or NeRF, involves training AI algorithms to create 3D objects from two-dimensional photos. NeRF can fill in the blanks, so to speak, by interpolating what the 2D photos didn't capture. It's a neat trick that could lead to advances in various fields, such as video games and autonomous driving. Now, NVIDIA has developed a new NeRF technique -- the fastest one to date, the company claims -- that needs only seconds to train and to generate a 3D scene. The model, called Instant NeRF, trains on dozens of still photos along with the camera angles they were taken from. Like other NeRF techniques, it requires images taken from multiple positions.