Fast Training of Neural Lumigraph Representations using Meta-learning: Supplementary Document
Alexander W. Bergman, Petr Kellnhofer, Gordon Wetzstein
Stanford University
Neural Information Processing Systems
As described in the main text, we plan to release all code used to obtain the results for our method. All data used have been made publicly available by their authors. All methods are implemented in PyTorch and evaluated on a subset of the GPUs in our internal server, which comprises four Nvidia Quadro RTX 8000 and six Nvidia Quadro RTX 6000 cards. Because our resources are limited and both our method and many baselines must be trained to convergence, we report error bars across multiple test scenes rather than across random seeds; each evaluation is run with a randomly generated seed. Implementation details on network architectures and hyperparameters for the DTU [1, 2] and NLR [3] datasets are given in Sections 1.1 and 1.2, respectively. For each DTU scene, we use 7 of the 49 ground-truth images for training.
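The per-scene error-bar protocol above can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the scene names and PSNR values are hypothetical placeholders, and the aggregation simply reports the mean and sample standard deviation of one metric value per test scene.

```python
import statistics

def error_bar_across_scenes(per_scene_metric):
    """Aggregate a metric measured once per test scene into (mean, sample std).

    This mirrors reporting error bars over test scenes rather than over
    random seeds: each scene contributes one number, and the spread across
    scenes gives the error bar.
    """
    values = list(per_scene_metric.values())
    return statistics.mean(values), statistics.stdev(values)

# Hypothetical per-scene PSNR values for illustration only.
psnr_per_scene = {"scan1": 24.0, "scan2": 26.0, "scan3": 25.0}
mean, std = error_bar_across_scenes(psnr_per_scene)
print(f"PSNR: {mean:.2f} +/- {std:.2f}")
```

With three scenes the error bar reflects scene-to-scene variation only; each individual evaluation can still use its own random seed without affecting this aggregation.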