AtlasNet



Review for NeurIPS paper: 3D Shape Reconstruction from Vision and Touch

Neural Information Processing Systems

- I would suggest removing this claim: "...touch provides high fidelity localized information while vision provides complementary global context." In machine learning we tend to anthropomorphize our algorithms with little evidence. A counterexample is the case of congenitally blind people, who seem to have no problem describing the global context of things they touch; see "Imagery in the congenitally blind: How visual are visual images?", Zimler and Keenan, 1983.
- The way the paper presents the idea of using charts makes it seem like a novel contribution, but in reality it is built on top of AtlasNet, which also uses the term "chart" to describe its method. In fact, a follow-up paper to AtlasNet [a] generalizes the charts idea even further, which this paper does not cite. I would therefore suggest toning down statements that present this as a novel contribution, such as "...which we call charts."


2nd Place Solution for IJCAI-PRICAI 2020 3D AI Challenge: 3D Object Reconstruction from A Single Image

Cao, Yichen, Wei, Yufei, Liu, Shichao, Xu, Lin

arXiv.org Artificial Intelligence

In this paper, we present our solution for the IJCAI-PRICAI 2020 3D AI Challenge: 3D Object Reconstruction from A Single Image. We develop a variant of AtlasNet that consumes single 2D images and generates 3D point clouds through 2D to 3D mapping. To push the performance to the limit and present guidance on crucial implementation choices, we conduct extensive experiments to analyze the influence of decoder design and different settings on the normalization, projection, and sampling methods. Our method achieves 2nd place in the final track with a score of 70.88, a chamfer distance of 36.87, and a mean f-score of 59.18. The source code of our method will be available at https://github.com/em-data/Enhanced_AtlasNet_3DReconstruction.
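The chamfer distance reported above is the standard metric for comparing a predicted point cloud against the ground truth. A minimal NumPy sketch of the symmetric squared-distance variant is shown below; note that the exact averaging and scaling convention used by the challenge leaderboard is an assumption here, and conventions (sum vs. mean, squared vs. root) differ between papers.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point clouds a (N, 3) and b (M, 3).

    Averages the squared distance from each point to its nearest
    neighbour in the other cloud, in both directions.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Nearest-neighbour term in each direction, then sum.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))
print(chamfer_distance(cloud, cloud))        # identical clouds give 0.0
print(chamfer_distance(cloud, cloud + 1.0))  # shifted clouds give a positive value
```

This brute-force O(NM) formulation is fine for evaluation-sized clouds; practical training pipelines typically use a batched GPU implementation of the same quantity.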


Learning elementary structures for 3D shape generation and matching

Deprelle, Theo, Groueix, Thibault, Fisher, Matthew, Kim, Vladimir G., Russell, Bryan C., Aubry, Mathieu

arXiv.org Artificial Intelligence

We propose to represent shapes as the deformation and combination of learnable elementary 3D structures, which are primitives resulting from training over a collection of shapes. We demonstrate that the learned elementary 3D structures lead to clear improvements in 3D shape generation and matching. More precisely, we present two complementary approaches for learning elementary structures: (i) patch deformation learning and (ii) point translation learning. Both approaches can be extended to abstract structures of higher dimensions for improved results. We evaluate our method on two tasks: reconstructing ShapeNet objects and estimating dense correspondences between human scans (FAUST inter challenge). We show a 16% improvement over surface deformation approaches for shape reconstruction and outperform the state of the art on the FAUST inter challenge by 6%.
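The two parameterizations in the abstract can be sketched side by side. The snippet below is an untrained illustration, not the authors' implementation: the MLP weights, patch sizes, and offsets are random stand-ins for what would be learned parameters. Patch deformation warps a fixed 2D patch into 3D through a small network, while point translation directly optimizes a set of 3D points plus per-point offsets.

```python
import numpy as np

rng = np.random.default_rng(0)

# (i) Patch deformation learning: a small MLP warps samples from a
# fixed 2D unit-square patch into a 3D surface. Weights here are random
# placeholders for learned parameters.
uv = rng.uniform(size=(256, 2))               # samples on the unit square
W1, b1 = rng.normal(size=(2, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 3)), np.zeros(3)
deformed = np.tanh(uv @ W1 + b1) @ W2 + b2    # (256, 3) deformed patch points

# (ii) Point translation learning: a learnable set of elementary 3D
# points is adjusted by learnable per-point translations.
points = rng.normal(size=(256, 3))            # elementary structure points
offsets = 0.1 * rng.normal(size=(256, 3))     # per-point translations
translated = points + offsets                 # (256, 3) output points

print(deformed.shape, translated.shape)
```

In training, both parameter sets would be optimized end-to-end against a reconstruction loss such as chamfer distance; the contrast shown is purely in how the output points are parameterized.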