MetaSDF: Meta-Learning Signed Distance Functions

Neural Information Processing Systems

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution. Generalizing across shapes with such neural implicit representations amounts to learning priors over the respective function space and enables geometry reconstruction from partial or noisy observations. Existing generalization methods rely on conditioning a neural network on a low-dimensional latent code that is either regressed by an encoder or jointly optimized in the auto-decoder framework. Here, we formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task. We demonstrate that this approach performs on par with auto-decoder based approaches while being an order of magnitude faster at test-time inference. We further demonstrate that the proposed gradient-based method outperforms encoder-decoder based methods that leverage pooling-based set encoders.
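To make the test-time procedure concrete, below is a minimal PyTorch sketch of the gradient-based meta-learning idea described in the abstract, assuming a MAML-style inner loop (PyTorch >= 2.0 for torch.func.functional_call). The names (sdf_net, adapt, context_points, inner_steps, ...) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A small coordinate MLP mapping 2D points to signed distance values.
sdf_net = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

def adapt(net, context_points, context_sdf, inner_steps=5, inner_lr=1e-2):
    """Specialize the shared initialization to one shape via a few gradient
    steps on that shape's own (point, SDF) observations.

    context_points: (N, 2) sample locations; context_sdf: (N, 1) SDF values.
    """
    fast_weights = dict(net.named_parameters())
    for _ in range(inner_steps):
        pred = torch.func.functional_call(net, fast_weights, (context_points,))
        loss = F.l1_loss(pred, context_sdf)
        # create_graph=True keeps the inner updates differentiable so the
        # outer loop can backpropagate into the initialization (second-order).
        grads = torch.autograd.grad(loss, list(fast_weights.values()),
                                    create_graph=True)
        fast_weights = {name: w - inner_lr * g
                        for (name, w), g in zip(fast_weights.items(), grads)}
    return fast_weights

# Outer-loop sketch: evaluate the adapted weights on held-out query points of
# the same shape and update the shared initialization on that query loss.
# adapted = adapt(sdf_net, ctx_pts, ctx_sdf)
# q_pred = torch.func.functional_call(sdf_net, adapted, (q_pts,))
# F.l1_loss(q_pred, q_sdf).backward(); meta_optimizer.step()
```

At test time only the inner loop runs, starting from the meta-learned initialization; under this reading, that is what makes inference an order of magnitude faster than optimizing a latent code from scratch as in the auto-decoder framework.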


MetaSDF: Supplementary Material - Vincent Sitzmann

Neural Information Processing Systems

These authors contributed equally to this work. We now analyze a single layer of a neural network with conditioning via concatenation. Here, we provide exact specifications of the 2D experiments to ensure reproducibility. [Table fragment: NP, 4-layer set encoder: 101.7 / 5.1 / 154; NP, 9-layer set encoder: 92.5 / 2.0]
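As a reading aid, the single-layer analysis mentioned above can be summarized as follows; this is a sketch in our own notation of the standard observation about conditioning via concatenation, not a quotation from the supplement.

```latex
% A single layer that conditions on a latent code z by concatenating it
% to the input x (phi is the layer's nonlinearity):
\[
  y \;=\; \phi\!\left( W \begin{bmatrix} x \\ z \end{bmatrix} + b \right)
    \;=\; \phi\!\left( W_x x + \underbrace{W_z z}_{\text{per-shape bias}} + b \right)
\]
% Splitting W column-wise as (W_x, W_z) shows the code only shifts the
% pre-activation: conditioning via concatenation amounts to a learned,
% shape-specific bias in each conditioned layer.
```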



Author Response for NeurIPS paper: MetaSDF: Meta-Learning Signed Distance Functions

Neural Information Processing Systems

"motivation [...] very convincing and perfectly pitched to the reader" We believe that this will spur follow-up work benefitting both of these promising research directions. We have trained models on the ShapeNet "benches" class--please see qualitative We note that 2D results (Sec. Figure 1 of DeepSDF--see qualitative result in (c)--with no further fine-tuning or heuristics. We will add experiments and comparisons with further classes to the final manuscript. We will discuss DISN in-depth. We benchmark against this architecture (see submission Table 3, Figure 1).


Review for NeurIPS paper: MetaSDF: Meta-Learning Signed Distance Functions

Neural Information Processing Systems

Weaknesses: While the ideas in this paper are novel and the results promising, I have several concerns that I feel the authors should address. In the reconstruction from surface points only, auto-decoder methods such as DeepSDF naturally fail since the predicted function quickly degenerates to zero. That said, in downstream shape completion tasks, one almost certainly has access to some estimate for the surface normal, in which case auto-decoders can be used. I would like to see some kind of experiment evaluating partial shape reconstruction with surface information (see e.g. Figure 8 in DeepSDF). I think the partial shape completion experiment is important for two reasons: The first is that it shows MetaSDF in a real-world downstream task, something that is lacking in the current version of the paper.
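To spell out the degeneracy the reviewer refers to (our own illustrative formulation, not text from the review or the paper): when an auto-decoder fits a shape from surface samples alone, the objective admits a trivial solution.

```latex
% Fitting from surface samples x_i alone, where the SDF must vanish, i.e.
% minimizing over the latent code (or network weights) theta:
\[
  \min_{\theta}\; \sum_{i} \bigl| f_{\theta}(x_i) \bigr|
\]
% is globally minimized by the constant function f_theta == 0, which encodes
% no geometry. Off-surface samples with signed distances, or surface-normal
% constraints such as \nabla f_{\theta}(x_i) = n_i, are needed to rule out
% this degenerate solution.
```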


Review for NeurIPS paper: MetaSDF: Meta-Learning Signed Distance Functions

Neural Information Processing Systems

This paper proposes to use a meta-learning tool for learning implicit shape representations in order to better generalize across different shapes. The authors did a good job in the rebuttal, which answered most of the reviewers' concerns well, leading two reviewers to raise their scores. For the final version, I would suggest the authors include experiments on more ShapeNet classes.

