Sampling 3D Gaussian Scenes in Seconds with Latent Diffusion Models
Henderson, Paul, de Almeida, Melonie, Ivanova, Daniela, Anciukevičius, Titas
arXiv.org Artificial Intelligence
We present a latent diffusion model over 3D scenes that can be trained using only 2D image data. To achieve this, we first design an autoencoder that maps multi-view images to 3D Gaussian splats and simultaneously builds a compressed latent representation of these splats. Then, we train a multi-view diffusion model over the latent space to learn an efficient generative model. This pipeline requires neither object masks nor depths, and is suitable for complex scenes with arbitrary camera positions. We conduct careful experiments on two large-scale datasets of complex real-world scenes - MVImgNet and RealEstate10K. We show that our approach enables generating 3D scenes in as little as 0.2 seconds, either from scratch, from a single input view, or from sparse input views. It produces diverse and high-quality results while running an order of magnitude faster than non-latent diffusion models and earlier NeRF-based generative models.
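The two-stage pipeline in the abstract - an autoencoder from multi-view images to a compressed latent plus 3D Gaussian splats, then a diffusion model sampled over that latent - can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, the 14-number splat layout, and the stand-in linear encoder/decoder and denoiser are assumptions chosen only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper)
V, H, W = 4, 64, 64   # number of input views, image height/width
D = 16                # latent channels
N_SPLATS = 1024       # Gaussians decoded per scene
SPLAT_DIM = 14        # pos(3) + scale(3) + quaternion(4) + opacity(1) + rgb(3)

W_enc = rng.standard_normal((H * W * 3, D)) * 0.01
W_dec = rng.standard_normal((D, N_SPLATS * SPLAT_DIM)) * 0.01

def encode(images):
    """Stage 1a: map multi-view images to a compressed latent.
    Stand-in: mean-pool over views, then a fixed linear projection."""
    pooled = images.reshape(V, -1).mean(axis=0)        # (H*W*3,)
    return pooled @ W_enc                              # (D,)

def decode_to_splats(z):
    """Stage 1b: decode a latent into per-Gaussian splat parameters."""
    return (z @ W_dec).reshape(N_SPLATS, SPLAT_DIM)    # (N_SPLATS, SPLAT_DIM)

def sample_latent(steps=50):
    """Stage 2: toy reverse-diffusion loop over the latent space.
    The real model is a learned multi-view diffusion network; here the
    denoiser is a fixed shrinkage map, just to show the loop structure."""
    z = rng.standard_normal(D)
    for t in range(steps, 0, -1):
        pred_noise = 0.1 * z                           # stand-in denoiser
        z = z - pred_noise / steps
        if t > 1:
            z = z + 0.01 * rng.standard_normal(D)      # injected noise
    return z

# Conditional path: encode observed views, then decode a scene.
images = rng.standard_normal((V, H, W, 3))
scene_from_views = decode_to_splats(encode(images))

# Unconditional path: sample a latent from the diffusion prior, then decode.
scene_from_prior = decode_to_splats(sample_latent())
print(scene_from_prior.shape)   # (1024, 14)
```

Decoding to explicit Gaussian splats (rather than a NeRF) is what allows the sub-second rendering and sampling times the abstract reports, since splats can be rasterized directly.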
Jun-18-2024