Wang, Pengfei
Neural-IMLS: Learning Implicit Moving Least-Squares for Surface Reconstruction from Unoriented Point Clouds
Wang, Zixiong, Wang, Pengfei, Dong, Qiujie, Gao, Junjie, Chen, Shuangmin, Xin, Shiqing, Tu, Changhe
Surface reconstruction from noisy, non-uniform, and unoriented point clouds is a fascinating yet difficult problem in computer vision and computer graphics. In this paper, we propose Neural-IMLS, a novel approach that learns a noise-resistant signed distance function (SDF) for reconstruction. Instead of explicitly learning priors from ground-truth signed distance values, our method learns the SDF directly from raw point clouds in a self-supervised fashion by minimizing the discrepancy between a pair of SDFs: one obtained by the implicit moving least-squares (IMLS) function and the other predicted by our network. Finally, a watertight and smooth 2-manifold triangle mesh is extracted by running Marching Cubes. We conduct extensive experiments on various benchmarks to demonstrate the performance of Neural-IMLS, especially on point clouds with noise.
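To make the self-supervision concrete, here is a minimal sketch (not the authors' released code) of an IMLS-consistency loss of the kind the abstract describes, assuming a PyTorch MLP `sdf_net` that maps 3D queries to scalar SDF values; since the input is unoriented, normals are approximated by the network's own gradients. The function names, the Gaussian bandwidth `sigma`, and the neighborhood size `k` are illustrative assumptions.

```python
import torch

def imls_consistency_loss(sdf_net, points, queries, sigma=0.05, k=8):
    # Unoriented input: approximate point normals by the normalized
    # gradient of the network SDF at the input points (assumption).
    points = points.detach().requires_grad_(True)
    f_p = sdf_net(points)
    normals = torch.autograd.grad(f_p.sum(), points, create_graph=True)[0]
    normals = torch.nn.functional.normalize(normals, dim=-1)

    # k nearest input points for each query location.
    d2 = torch.cdist(queries, points) ** 2           # (Q, N)
    knn_d2, idx = d2.topk(k, dim=-1, largest=False)  # (Q, k)
    nbr_pts = points[idx]                            # (Q, k, 3)
    nbr_nrm = normals[idx]                           # (Q, k, 3)

    # IMLS value: Gaussian-weighted average of signed distances to the
    # local tangent planes of the neighboring points.
    w = torch.exp(-knn_d2 / (sigma ** 2))            # (Q, k)
    plane_dist = ((queries.unsqueeze(1) - nbr_pts) * nbr_nrm).sum(-1)
    f_imls = (w * plane_dist).sum(-1) / (w.sum(-1) + 1e-8)

    # Self-supervised loss: the network SDF should agree with the IMLS SDF.
    return (sdf_net(queries).squeeze(-1) - f_imls).abs().mean()
```

Because the normals feed into the IMLS target through `create_graph=True`, minimizing this loss couples the two SDFs and lets gradients flow back into the network, which is the self-supervised behavior the paper relies on.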
Stable Learning via Self-supervised Invariant Risk Minimization
Yu, Zhengxu, Wang, Pengfei, Xu, Junkai, Xie, Liang, Jin, Zhongming, Huang, Jianqiang, He, Xiaofei, Cai, Deng, Hua, Xian-Sheng
Empirical Risk Minimization (ERM) based methods rely on the hypothesis that all data samples are generated i.i.d. However, this hypothesis does not hold in many real-world applications. Consequently, simply minimizing the training loss can lead the model to recklessly absorb all statistical correlations in the training dataset, which is why a well-trained model may perform unstably across different testing environments. Hence, learning a stable predictor that performs well in all testing environments simultaneously is important for machine learning tasks. In this work, we study this problem from the perspective of Invariant Risk Minimization. Specifically, we propose a novel Self-supervised Invariant Risk Minimization method based on the fact that the true causal connections between features remain consistent no matter how the environment changes. First, we propose a self-supervised invariant representation learning objective that aims to learn a stable representation of the consistent causality. Based on that, we further propose a stable predictor training algorithm, which improves the predictor's stability using the invariant representation learned with our proposed objective. We conduct extensive experiments on both synthetic and real-world datasets to show that our proposal outperforms previous state-of-the-art stable learning methods. The code will be released later.
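The abstract does not spell out the exact form of the proposed self-supervised objective, so the following is only a minimal sketch of the IRMv1-style penalty (Arjovsky et al., 2019) that Invariant Risk Minimization builds on: the average risk over environments plus a term penalizing classifiers that are not simultaneously optimal in every environment. All names here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # Gradient of the per-environment risk w.r.t. a fixed dummy scale
    # w = 1; its squared norm measures how far the classifier is from
    # being optimal in this environment.
    w = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * w, labels)
    grad = torch.autograd.grad(loss, w, create_graph=True)[0]
    return grad ** 2

def stable_objective(model, env_batches, lam=1.0):
    # Average risk plus invariance penalty across training environments;
    # `env_batches` is an iterable of (inputs, labels) per environment.
    risks, penalties = [], []
    for x, y in env_batches:
        logits = model(x)
        risks.append(F.cross_entropy(logits, y))
        penalties.append(irm_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```

A predictor trained against such an objective is pushed toward representations whose predictive relationship with the label is stable across environments, which is the notion of stability the paper targets.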