UP-NeRF: Unconstrained Pose-Prior-Free Neural Radiance Fields (Supplement)
In this supplementary material, we provide additional implementation details of our model (Appendix A) and visualizations of the ablation studies (Appendix B) that are not included in the main paper. BARF-W and BARF-WD are based on [2] because no official NeRF-W code is available. The detailed architecture of UP-NeRF is shown in Figure 1. The first two authors contributed equally. As mentioned in the main paper, the evaluation process entails two stages: test-time pose optimization and appearance optimization.
Adaptive Denoising via GainTuning
Deep convolutional neural networks (CNNs) for image denoising are typically trained on large datasets. These models achieve the current state of the art, but they do not generalize well to data that deviate from the training distribution. Recent work has shown that it is possible to train denoisers on a single noisy image. These models adapt to the features of the test image, but their performance is limited by the small amount of information used to train them. Here we propose "GainTuning", a methodology by which CNN models pre-trained on large datasets can be adaptively and selectively adjusted for individual test images. To avoid overfitting, GainTuning optimizes a single multiplicative scaling parameter (the "Gain") of each channel in the convolutional layers of the CNN. We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set. These adaptive improvements are even more substantial for test images differing systematically from the training data, either in noise level or image type.
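The key to GainTuning's overfitting resistance is the tiny size of the adapted parameter set: one multiplicative gain per convolutional channel, with the pre-trained weights frozen. A minimal sketch of that parameterization (variable names are illustrative, not from the GainTuning code):

```python
import numpy as np

# Hedged sketch: per-channel multiplicative "gain" adaptation.
# A pre-trained conv layer has weights W of shape (out_ch, in_ch, k, k).
# GainTuning freezes W and optimizes one scalar gain per output channel
# at test time, i.e. out_ch parameters instead of out_ch * in_ch * k * k.

rng = np.random.default_rng(0)
out_ch, in_ch, k = 64, 64, 3
W = rng.standard_normal((out_ch, in_ch, k, k))   # frozen pre-trained weights

gain = np.ones(out_ch)                           # the only test-time-trainable parameters
W_adapted = gain[:, None, None, None] * W        # effective weights after gain scaling

gain_params = gain.size                          # parameters GainTuning adjusts
full_params = W.size                             # parameters full fine-tuning would adjust
print(gain_params, full_params)                  # 64 vs 36864
```

With `gain` initialized to ones, the adapted layer is exactly the pre-trained one; a self-supervised denoising loss on the single test image then nudges only these 64 scalars.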
Efficient Test-Time Adaptation for Super-Resolution with Second-Order Degradation and Reconstruction
Image super-resolution (SR) aims to learn a mapping from low-resolution (LR) to high-resolution (HR) images using paired HR-LR training images. Conventional SR methods typically gather the paired training data by synthesizing LR images from HR images using a predetermined degradation model, e.g., bicubic down-sampling.
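The paired-data synthesis step can be sketched as follows. This is an assumption-laden toy: a 2x block average stands in for bicubic down-sampling (real pipelines typically use bicubic kernels and may add blur or noise), and the function name is illustrative:

```python
import numpy as np

# Hedged sketch: synthesizing paired HR-LR training data with a
# predetermined degradation model. A scale x scale block average is used
# here as a simple stand-in for bicubic down-sampling.

def degrade(hr: np.ndarray, scale: int = 2) -> np.ndarray:
    """Down-sample a 2-D image by averaging each scale x scale block."""
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale          # crop to a multiple of scale
    blocks = hr[:h, :w].reshape(h // scale, scale, w // scale, scale)
    return blocks.mean(axis=(1, 3))

hr = np.arange(16, dtype=float).reshape(4, 4)    # toy "HR" image
lr = degrade(hr)                                 # paired "LR" counterpart
print(lr.shape)                                  # (2, 2)
```

Each (hr, lr) pair produced this way becomes one training example; the mismatch between this fixed synthetic degradation and real-world degradations is exactly what test-time adaptation methods try to close.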
SSA-Seg: Semantic and Spatial Adaptive Pixel-level Classifier for Semantic Segmentation
Vanilla pixel-level classifiers for semantic segmentation are based on a certain paradigm, involving the inner product of fixed prototypes obtained from the training set and pixel features in the test image. This approach, however, encounters significant limitations, i.e., feature deviation in the semantic domain and information loss in the spatial domain. The former struggles with large intra-class variance among pixel features from different images, while the latter fails to utilize the structured information of semantic objects effectively. This leads to blurred mask boundaries as well as a deficiency of fine-grained recognition capability. In this paper, we propose a novel Semantic and Spatial Adaptive Classifier (SSA-Seg) to address the above challenges. Specifically, we employ the coarse masks obtained from the fixed prototypes as a guide to adjust the fixed prototypes towards the centers of the semantic and spatial domains in the test image. The adapted prototypes in the semantic and spatial domains are then simultaneously considered to accomplish classification decisions. In addition, we propose an online multi-domain distillation learning strategy to improve the adaptation process. Experimental results on three publicly available benchmarks show that the proposed SSA-Seg significantly improves the segmentation performance of the baseline models with only a minimal increase in computational cost.
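The "vanilla" paradigm that SSA-Seg starts from can be sketched in a few lines: class scores are inner products between fixed class prototypes and per-pixel features, and the resulting argmax gives the coarse mask that SSA-Seg then uses to adapt the prototypes. A minimal sketch with illustrative names (not from the SSA-Seg code):

```python
import numpy as np

# Hedged sketch of the vanilla pixel-level classifier: logits are inner
# products of fixed class prototypes (learned on the training set) with
# per-pixel features of the test image.

rng = np.random.default_rng(0)
num_classes, dim, h, w = 5, 8, 4, 4
prototypes = rng.standard_normal((num_classes, dim))   # fixed after training
features = rng.standard_normal((h, w, dim))            # test-image pixel features

logits = features @ prototypes.T                       # (h, w, num_classes)
coarse_mask = logits.argmax(axis=-1)                   # per-pixel class decision
print(coarse_mask.shape)                               # (4, 4)
```

Because `prototypes` never sees the test image, intra-class feature deviation and the loss of spatial structure go uncorrected here; SSA-Seg's contribution is to shift these prototypes toward the test image's own semantic and spatial centers before the final decision.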