

DreamSparse: Escaping from Plato's Cave with 2D Diffusion Model Given Sparse Views

Neural Information Processing Systems

Recent works [76, 33, 70, 72, 17, 7, 66, 20] have started to explore sparse-view novel view synthesis, specifically focusing on generating novel views from a limited number of input images (typically 2-3) with known camera poses. Some of them [33, 70, 72, 17, 7] introduce additional priors into NeRF, e.g.





Synthesizing novel view images from only a few input views is a challenging but practical problem. Existing methods often struggle to produce high-quality results, or require per-object optimization, in such few-view settings because of the limited information available. In this work, we explore leveraging the strong 2D priors in pre-trained diffusion models for synthesizing novel view images. To address these problems, we propose DreamSparse, a framework that enables a frozen pre-trained diffusion model to generate geometry- and identity-consistent novel view images. Specifically, DreamSparse incorporates a geometry module that captures spatial features from the sparse input views as a 3D prior. A spatial guidance model then converts the rendered feature maps into spatial conditioning for the generative process.
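The pipeline the abstract describes can be sketched at a very high level: extract 2D features from each sparse view, aggregate them into a coarse 3D feature prior, render that prior from the target pose, and use the rendered feature map to spatially condition a frozen generator. The toy code below is only an illustrative sketch of that data flow; every function name, shape, and operation here is an assumption for exposition, not the authors' actual architecture (which uses a pre-trained diffusion U-Net and a learned geometry module).

```python
import numpy as np

# Illustrative sketch of a DreamSparse-style data flow.
# All names and operations here are hypothetical stand-ins,
# not the paper's real implementation.

def extract_2d_features(view, dim=8):
    """Stand-in 2D feature extractor (a pre-trained backbone in practice)."""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((view.shape[-1], dim))
    return view @ proj  # (h, w, dim) feature map

def build_3d_feature_volume(features_per_view, grid=4):
    """Aggregate per-view features into a coarse 3D prior; a real geometry
    module would unproject along camera rays rather than pool globally."""
    pooled = [f.mean(axis=(0, 1)) for f in features_per_view]  # (dim,) each
    avg = np.mean(pooled, axis=0)
    return np.broadcast_to(avg, (grid, grid, grid, avg.shape[0])).copy()

def render_feature_map(volume, h=16, w=16):
    """Render the volume from a novel pose; trivial depth-averaging stands
    in for differentiable volume rendering."""
    plane = volume.mean(axis=0)  # (grid, grid, dim)
    reps_h, reps_w = h // plane.shape[0], w // plane.shape[1]
    return np.kron(plane, np.ones((reps_h, reps_w, 1)))  # (h, w, dim)

def spatial_guidance(frozen_sample, feature_map, scale=0.1):
    """Inject the rendered spatial features into a frozen generator's
    output; the real model conditions the diffusion process instead."""
    guidance = feature_map.mean(axis=-1, keepdims=True)  # (h, w, 1)
    return frozen_sample + scale * guidance

# Two sparse input views (random stand-ins for posed images).
views = [np.random.default_rng(i).random((16, 16, 3)) for i in range(2)]
feats = [extract_2d_features(v) for v in views]
volume = build_3d_feature_volume(feats)
fmap = render_feature_map(volume)
novel = spatial_guidance(np.zeros((16, 16, 3)), fmap)
print(novel.shape)  # (16, 16, 3)
```

The key design point the sketch mirrors is that the diffusion backbone stays frozen: only the geometry and guidance components supply view-consistent spatial information, so the strong 2D image prior is reused rather than retrained.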