3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation
Yi Zhang, Pengliang Ji, Angtian Wang, Jieru Mei, Adam Kortylewski, Alan Yuille
arXiv.org Artificial Intelligence
Regression-based methods for 3D human pose estimation directly predict the 3D pose parameters from a 2D image using deep networks. While such methods achieve state-of-the-art performance on standard benchmarks, their performance degrades under occlusion. In contrast, optimization-based methods fit a parametric body model to 2D features in an iterative manner. Their localized reconstruction loss can potentially make them robust to occlusion, but they suffer from the 2D-3D ambiguity. Motivated by the recent success of generative models in rigid object pose estimation, we propose 3D-aware Neural Body Fitting (3DNBF), an approximate analysis-by-synthesis approach to 3D human pose estimation with state-of-the-art performance and occlusion robustness. In particular, we propose a generative model of deep features based on a volumetric human representation with Gaussian ellipsoidal kernels that emit 3D pose-dependent feature vectors. The neural features are trained with contrastive learning to become 3D-aware and hence overcome the 2D-3D ambiguity. Experiments show that 3DNBF outperforms other approaches on both occluded and standard benchmarks. Code is available at https://github.com/edz-o/3DNBF
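The abstract outlines the core mechanism: Gaussian ellipsoidal kernels attached to a volumetric body representation, each emitting a learnable, pose-dependent feature vector that is compared against CNN features in an analysis-by-synthesis loop. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation: all class names, tensor shapes, and the simplified 2D rendering are assumptions made for brevity.

```python
# Minimal sketch (assumed names/shapes): Gaussian kernels emit learnable feature
# vectors, which are rendered into an expected feature map and compared against
# target CNN features; the "pose" is refined by gradient descent.
import torch
import torch.nn as nn


class GaussianEllipsoidFeatureModel(nn.Module):
    def __init__(self, num_kernels=64, feat_dim=128, feat_h=56, feat_w=56):
        super().__init__()
        # One learnable feature vector ("emission") per kernel.
        self.kernel_features = nn.Parameter(torch.randn(num_kernels, feat_dim))
        # Per-kernel spatial bandwidth (isotropic here for simplicity).
        self.log_sigma = nn.Parameter(torch.zeros(num_kernels))
        ys, xs = torch.meshgrid(
            torch.arange(feat_h, dtype=torch.float32),
            torch.arange(feat_w, dtype=torch.float32),
            indexing="ij",
        )
        self.register_buffer("grid", torch.stack([xs, ys], dim=-1))  # (H, W, 2)

    def forward(self, centers_2d):
        """centers_2d: (K, 2) projected kernel centers in feature-map pixels.
        Returns the rendered feature map of shape (feat_dim, H, W)."""
        # Squared distance of every pixel to every kernel center: (K, H, W).
        d2 = ((self.grid[None] - centers_2d[:, None, None, :]) ** 2).sum(-1)
        sigma2 = torch.exp(self.log_sigma)[:, None, None] ** 2 + 1e-6
        weights = torch.exp(-0.5 * d2 / sigma2)
        weights = weights / (weights.sum(0, keepdim=True) + 1e-6)
        # Blend kernel features by spatial weights -> expected feature map.
        rendered = torch.einsum("kc,khw->chw", self.kernel_features, weights)
        return nn.functional.normalize(rendered, dim=0)


def fit_pose(model, target_features, init_centers, steps=100, lr=1e-2):
    """Toy analysis-by-synthesis loop: optimize the 2D kernel centers (standing
    in for the pose-dependent body model) to maximize feature agreement."""
    for p in model.parameters():  # keep the learned feature model fixed
        p.requires_grad_(False)
    centers = init_centers.clone().requires_grad_(True)
    opt = torch.optim.Adam([centers], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = model(centers)
        # Negative cosine agreement, averaged over pixels, as the fitting loss.
        loss = -(rendered * target_features).sum(0).mean()
        loss.backward()
        opt.step()
    return centers.detach()
```

In the paper's actual formulation the kernels live in 3D on the body model and are projected through the camera and pose parameters, and the features are trained with contrastive learning to be 3D-aware; here the pose is collapsed into 2D kernel centers purely to keep the sketch short.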
Aug-19-2023
- Country:
  - Asia > Middle East
    - Israel (0.14)
  - North America > United States (0.14)
- Genre:
  - Research Report (0.64)
- Industry:
  - Health & Medicine (0.70)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks
      - Deep Learning (0.93)
    - Representation & Reasoning > Uncertainty (1.00)
    - Robots > Humanoid Robots (0.85)
    - Vision > Video Understanding (1.00)