From Transparent to Opaque: Rethinking Neural Implicit Surfaces with α-NeuS

Neural Information Processing Systems

Recent advances in neural radiance fields and their variants primarily address opaque or transparent objects, and encounter difficulties when reconstructing both simultaneously. This paper introduces α-NeuS, an extension of NeuS, and proves that NeuS is unbiased for materials ranging from fully transparent to fully opaque. We find that transparent and opaque surfaces align with the non-negative local minima and the zero iso-surface, respectively, in the learned distance field of NeuS. Traditional iso-surface extraction algorithms, such as marching cubes, rely on fixed iso-values and are therefore ill-suited to such data. We develop a method, based on DCUDF, that extracts transparent and opaque surfaces simultaneously. To validate our approach, we construct a benchmark that includes both real-world and synthetic scenes, demonstrating its practical utility and effectiveness.
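The extraction idea can be illustrated with a small stand-in for the DCUDF-based extractor (a minimal sketch, not the paper's method): running marching cubes on the absolute value of the learned distance field at a small positive iso-value picks up both the zero crossings (opaque) and the non-negative local minima (transparent), at the cost of a thin double-layered shell.

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def extract_surfaces(distance_field, epsilon=0.01):
    """Extract a mesh from a dense 3D grid sampled from the learned field.

    Taking |f| turns both zero crossings (opaque surfaces) and shallow
    non-negative local minima (transparent surfaces) into valleys that a
    single fixed iso-value can capture. The result is a thin double-layered
    shell around each surface; the paper's DCUDF-based method additionally
    collapses this double cover into a single layer.
    """
    verts, faces, normals, _ = measure.marching_cubes(np.abs(distance_field),
                                                      level=epsilon)
    return verts, faces, normals

# Toy field: a signed sphere of radius 0.3 (opaque, crosses zero) plus a
# shell at radius 0.7 that attains a small positive minimum without ever
# changing sign (transparent). A fixed zero iso-value would miss the shell.
xs = np.linspace(-1.0, 1.0, 128)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
field = np.minimum(r - 0.3, np.abs(r - 0.7) + 1e-4)
verts, faces, normals = extract_surfaces(field)
```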


A Task Descriptions and Training Settings

Neural Information Processing Systems

We provide a detailed description of all tasks and some additional details on the training of MDEQ. CIFAR-10 is a well-known computer vision dataset that consists of 60,000 color images, each of size 32 × 32 [31]. There are 10 object classes and 6,000 images per class. The entire dataset is divided into training (50K images) and testing (10K images) sets. We use two different training settings for evaluating the MDEQ model on CIFAR-10. Following Dupont et al. [18], we compare MDEQ-small with other implicit models on CIFAR-10 images without data augmentation (i.e., the original, raw images), using approximately 170K learnable parameters in the model.
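For concreteness, a minimal sketch of this no-augmentation data setting, assuming torchvision (the MDEQ model and training loop themselves are omitted):

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Raw images only: convert to tensors, with no random crops or flips.
transform = transforms.ToTensor()

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transform)   # 50K training images
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transform)    # 10K test images

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = DataLoader(test_set, batch_size=128, shuffle=False)

assert len(train_set) == 50_000 and len(test_set) == 10_000
```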



Conservative Data Sharing for Multi-Task Offline Reinforcement Learning

Neural Information Processing Systems

Offline reinforcement learning (RL) algorithms have shown promising results in domains where abundant pre-collected data is available. However, prior methods focus on solving individual problems from scratch with an offline dataset, without considering how an offline RL agent can acquire multiple skills. We argue that a natural use case of offline RL is in settings where we can pool large amounts of data collected in various scenarios for solving different tasks, and utilize all of this data to learn behaviors for all the tasks more effectively, rather than training each task in isolation. However, sharing data across all tasks in multi-task offline RL performs surprisingly poorly in practice. Through thorough empirical analysis, we find that sharing data can actually exacerbate the distributional shift between the learned policy and the dataset, which in turn can lead to divergence of the learned policy and poor performance. To address this challenge, we develop a simple technique for data sharing in multi-task offline RL that routes data based on its improvement over the task-specific data. We call this approach conservative data sharing (CDS), and it can be applied with multiple single-task offline RL methods. On a range of challenging multi-task locomotion, navigation, and vision-based robotic manipulation problems, CDS achieves performance that is the best among, or comparable to, prior offline multi-task RL methods and previous data sharing approaches.
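A minimal sketch of the routing idea, under the assumption that a conservative critic (e.g., from CQL) trained on the target task is available; `conservative_q` and the percentile threshold below are illustrative stand-ins, not the paper's exact rule:

```python
import numpy as np

def cds_share_mask(conservative_q, own_data, other_data, percentile=90.0):
    """Boolean mask over `other_data`: which relabeled transitions to share.

    conservative_q(s, a): conservative critic for the target task (stand-in).
    own_data / other_data: iterables of (state, action) pairs.
    """
    # Threshold computed on the task's OWN data: the `percentile`-th
    # percentile of conservative value estimates.
    own_values = np.array([conservative_q(s, a) for s, a in own_data])
    threshold = np.percentile(own_values, percentile)
    # Share a relabeled transition only if it is expected to improve over the
    # task-specific data, i.e. its conservative value clears the threshold;
    # everything else is routed away to limit distributional shift.
    return np.array([conservative_q(s, a) >= threshold for s, a in other_data])
```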



Dynamic Graph Neural Networks Under Spatio-Temporal Distribution Shift (Appendix)

Neural Information Processing Systems

The out-of-distribution generalization literature [1, 2, 3, 4, 5, 6, 7] widely adopts the assumption that the relationship between labels and certain parts of the features is invariant across data distributions; the subsets of features with this property are called invariant features.
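Stated formally, in one standard formulation (not quoted verbatim from the appendix), the assumption says there exists a feature map Φ whose induced conditional label distribution does not change across environments:

```latex
% Invariance assumption: the conditional label distribution given the
% invariant features \Phi(X) is identical in every environment
% (data distribution) e.
P^{e}\!\left(Y \mid \Phi(X)\right) = P^{e'}\!\left(Y \mid \Phi(X)\right)
\quad \text{for all environments } e, e'.
```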



Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity

Neural Information Processing Systems

This paper considers distributed convex-concave minimax optimization under the second-order similarity. We propose the stochastic variance-reduced optimistic gradient sliding (SVOGS) method, which takes advantage of the finite-sum structure in the objective through mini-batch client sampling and variance reduction.
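As an illustration of the two ingredients named above, here is a minimal sketch of an SVRG-style variance-reduced gradient estimator with mini-batch client sampling; it is an assumption-laden stand-in, not the full SVOGS gradient-sliding loop:

```python
import numpy as np

def vr_gradient(cur_grads, snap_grads, full_snap_grad, batch):
    """SVRG-style estimator of the full gradient at the current point x.

    cur_grads[i]:   client i's gradient evaluated at x.
    snap_grads[i]:  client i's gradient evaluated at a snapshot point w.
    full_snap_grad: average of all n clients' gradients at w.
    batch:          indices of the sampled clients (mini-batch sampling).
    """
    # Unbiased: E[g] = grad f(x). Under second-order similarity the
    # per-client differences cur - snap stay small, so the estimator's
    # variance shrinks as x approaches the snapshot w.
    correction = np.mean([cur_grads[i] - snap_grads[i] for i in batch], axis=0)
    return full_snap_grad + correction
```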