Learning Generalizable Light Field Networks from Few Images

Qian Li, Franck Multon, Adnane Boukhayma

arXiv.org Artificial Intelligence 

We explore a new strategy for few-shot novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray directly to its target pixel's color. The network is conditioned on local ray features generated by coarse volumetric rendering from an explicit 3D feature volume, which is built from the input images using a 3D ConvNet. Our method achieves competitive performance on synthetic and real MVS data compared to state-of-the-art neural radiance field based methods, while rendering 100 times faster.

Figure 1: Our method enables fast generation of novel views from sparse input images without 3D supervision in training.
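To make the architecture described above concrete, here is a minimal sketch (not the authors' released code) of a light field MLP that maps a ray, conditioned on a local per-ray feature, directly to an RGB color. The class name, layer sizes, and the 6D origin-plus-direction ray parameterization are illustrative assumptions; the per-ray features are stand-in tensors for what would be produced by coarse volumetric rendering of the 3D ConvNet feature volume.

```python
# Hedged sketch only: a light field network conditioned on per-ray features.
# All names and dimensions below are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class LightFieldNetwork(nn.Module):
    def __init__(self, ray_dim=6, feat_dim=32, hidden=256):
        super().__init__()
        # Ray given as origin + direction (6D), concatenated with a per-ray
        # feature vector assumed to come from the 3D feature volume.
        self.mlp = nn.Sequential(
            nn.Linear(ray_dim + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, rays, ray_feats):
        # rays: (N, 6); ray_feats: (N, feat_dim) -> colors: (N, 3)
        return self.mlp(torch.cat([rays, ray_feats], dim=-1))

# Usage: one forward pass per batch of rays, with no dense per-ray sampling
# loop, which is where a light field representation gains rendering speed
# over radiance-field style volume rendering.
model = LightFieldNetwork()
rays = torch.randn(1024, 6)
ray_feats = torch.randn(1024, 32)  # placeholder for coarse volume-rendered features
colors = model(rays, ray_feats)    # (1024, 3)
```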
