This AI recreated a whole virtual San Francisco from 2.8 million photos
AI-generated imagery and 3D content have come a long way in a very short space of time. It was only two years ago that Google researchers revealed NeRF, or Neural Radiance Fields, and less than two weeks ago NVIDIA blew us away with almost real-time generation of 3D scenes from just a few dozen still photographs using their "Instant NeRF" technique. Now, a new paper from the folks at Waymo describes "Block-NeRF", a technique for "scalable large scene neural view synthesis" – basically, generating extremely large environments. In this video, Károly Zsolnai-Fehér of Two Minute Papers explains how it all works. It's a very impressive achievement, and while it's massively ahead of where NeRF technology was just two years ago, it still isn't quite perfect.
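For readers unfamiliar with the underlying idea: a NeRF is, at its core, a learned function that maps a 3D position and viewing direction to a colour and a volume density, and images are produced by compositing those values along camera rays. The toy sketch below illustrates that rendering loop with a hand-written stand-in for the trained network (the field function and all constants here are illustrative, not from the paper):

```python
import numpy as np

def toy_radiance_field(xyz, view_dir):
    # Stand-in for a trained NeRF MLP: maps 3D points (and a view
    # direction) to RGB colours and volume densities (sigma).
    rgb = 0.5 * (np.sin(xyz) + 1.0)                 # colours in [0, 1]
    sigma = np.exp(-np.linalg.norm(xyz, axis=-1))   # density falls off from origin
    return rgb, sigma

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    # Classic NeRF-style volume rendering: sample points along the ray,
    # query the field, then alpha-composite front to back.
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    rgb, sigma = toy_radiance_field(pts, direction)
    delta = np.diff(t, append=far)                  # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)            # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                         # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)

colour = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

Training a real NeRF means fitting the field function so that rays rendered this way reproduce the input photographs; the rendering machinery itself stays essentially this simple.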
Google and Waymo used driverless cars to make a virtual San Francisco
Software can analyse millions of static photos of city streets taken from atop cars and construct a realistic 3D model that could be used to create immersive maps, or even to train driverless cars safely in a virtual environment. Block-NeRF was created by a team of researchers at driverless car company Waymo and at Google Research, both owned by Alphabet. The tool uses vast numbers of photos taken by cameras mounted atop Waymo's autonomous cars and builds numerous small 3D models, each covering just over one city block, which are then combined into a single large, navigable scene.
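The compositing step can be pictured as follows: each block model is only valid near the area it was trained on, so for a given camera position you pick the overlapping block models and blend their renders, weighting nearer blocks more heavily. This is a minimal sketch of that selection-and-blending logic under assumed names and radii (the actual Block-NeRF paper uses learned visibility and appearance matching on top of this):

```python
import numpy as np

# Hypothetical layout: one NeRF per city block, keyed by the block
# centre it was trained around. The radius is illustrative, not a
# parameter from the paper.
BLOCK_RADIUS = 1.5
block_centres = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])

def active_blocks(camera_xy):
    # Select every block NeRF whose training region covers the camera.
    d = np.linalg.norm(block_centres - camera_xy, axis=1)
    return np.nonzero(d < BLOCK_RADius if False else d < BLOCK_RADIUS)[0], d

def blend_weights(camera_xy):
    # Inverse-distance weights for compositing the overlapping renders,
    # so nearby blocks dominate and transitions stay smooth.
    idx, d = active_blocks(camera_xy)
    w = 1.0 / np.maximum(d[idx], 1e-6)
    return idx, w / w.sum()

# A camera halfway between the first two blocks sees both, equally weighted.
idx, w = blend_weights(np.array([1.0, 0.0]))
```

Scaling to a whole city is then a matter of training more block models and swapping them in and out as the camera moves, which is what makes the approach "scalable" compared with fitting one giant NeRF.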