Volume Feature Rendering for Fast Neural Radiance Field Reconstruction

Neural Information Processing Systems

Neural radiance fields (NeRFs) can synthesize realistic novel views from multi-view images captured from distinct positions and perspectives. In NeRF's rendering pipeline, neural networks are used either to represent a scene independently or to transform the queried learnable feature vector of a point into the expected color or density. With the aid of geometry guides, in the form of either occupancy grids or proposal networks, the number of color neural network evaluations in the standard volume rendering framework can be reduced from hundreds to dozens.
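To make the pipeline above concrete, here is a minimal sketch of the volume-feature-rendering idea: per-point features are composited with the standard volume-rendering weights first, so the color network runs once per ray instead of once per sample. All names (`features`, `sigmas`, `color_mlp`) are illustrative placeholders, not the paper's API.

```python
import torch

def volume_feature_render(features, sigmas, deltas, color_mlp):
    """Composite per-point features along each ray, then decode once.

    features:  (R, S, F) learnable feature vectors at S samples per ray
    sigmas:    (R, S)    predicted densities at those samples
    deltas:    (R, S)    distances between adjacent samples
    color_mlp: maps a composited (R, F) feature to an (R, 3) RGB color
    """
    # Standard volume-rendering weights: alpha compositing with transmittance.
    alphas = 1.0 - torch.exp(-sigmas * deltas)                                # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]                                                                 # (R, S)
    weights = alphas * trans                                                  # (R, S)

    # Composite features instead of colors: one MLP call per ray, not per sample.
    ray_feature = (weights.unsqueeze(-1) * features).sum(dim=1)               # (R, F)
    rgb = color_mlp(ray_feature)                                              # (R, 3)
    return rgb, weights
```

Because the expensive MLP is applied to the composited per-ray feature rather than to every per-sample feature, the number of color-network evaluations per ray drops from the sample count to one.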



Appendix [KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training]

Neural Information Processing Systems

Table 1 summarizes the models and datasets used in this work. ImageNet-1K (Deng et al., 2009): a subset of the ImageNet dataset. DeepCAM (Kurth et al., 2018): a dataset for image segmentation. Fractal-3K (Kataoka et al., 2022): a rendered dataset from the Visual Atom method; we also use the training setting from Kataoka et al. (2022). Table 2 shows the details of our hyper-parameters. Specifically, we follow the TorchVision guidelines to train ResNet-50 with a cosine learning-rate schedule. To show the robustness of KAKURENBO, we also train ResNet-50 with different settings; e.g., for the ResNet-50 (A) setting, we follow the hyper-parameters reported in Goyal et al. (2017). It is worth noting that KAKURENBO merely hides samples before the input pipeline. In this section, we present an analysis of the factors affecting KAKURENBO's performance. The results show that our method dynamically hides samples at each epoch.
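Since the text emphasizes that KAKURENBO hides samples purely in the data pipeline, the following is a minimal sketch of what that could look like in PyTorch. The importance scores and hide fraction are placeholders, not the paper's exact criterion.

```python
import torch
from torch.utils.data import DataLoader, SubsetRandomSampler

def make_epoch_loader(dataset, importance, hide_fraction, batch_size=256):
    """Build a per-epoch DataLoader that skips the lowest-importance samples.

    importance:    (N,) per-sample scores, e.g. running losses (placeholder;
                   the paper's actual criterion may differ).
    hide_fraction: fraction of the dataset to hide this epoch.
    """
    n_hide = int(len(dataset) * hide_fraction)
    # Keep every index except the n_hide lowest-importance ones.
    keep = torch.argsort(importance, descending=True)[: len(dataset) - n_hide]
    sampler = SubsetRandomSampler(keep.tolist())
    # Hiding happens purely in the data pipeline, before batches are formed;
    # the model, loss, and optimizer are untouched.
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```

Rebuilding the loader with fresh scores at the start of each epoch is what makes the hiding adaptive rather than a one-time dataset pruning.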


KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training

Neural Information Processing Systems

This paper proposes a method for hiding the least-important samples during the training of deep neural networks to increase efficiency, i.e., to reduce the cost of training.
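As a rough illustration of what tracking "least-important samples" during training could involve, here is a hypothetical importance tracker based on an exponential moving average of per-sample loss. The paper's actual measure may differ (it may, for instance, also use prediction confidence), so treat this purely as a sketch.

```python
import torch

class ImportanceTracker:
    """Track a per-sample importance score from observed training losses.

    A hypothetical scheme: exponential moving average of per-sample loss.
    Samples with the lowest scores are candidates to hide next epoch.
    """

    def __init__(self, num_samples, momentum=0.9):
        self.scores = torch.zeros(num_samples)
        self.momentum = momentum

    def update(self, indices, losses):
        # indices: (B,) dataset indices of this batch; losses: (B,) per-sample losses.
        self.scores[indices] = (
            self.momentum * self.scores[indices]
            + (1 - self.momentum) * losses.detach().cpu()
        )

    def lowest(self, fraction):
        # Indices of the lowest-score (least-important) samples to hide.
        k = int(len(self.scores) * fraction)
        return torch.argsort(self.scores)[:k]
```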