Multiple View Geometry Transformers for 3D Human Pose Estimation
Liao, Ziwei; Zhu, Jialiang; Wang, Chunyu; Hu, Han; Waslander, Steven L.
arXiv.org Artificial Intelligence
In this work, we aim to improve the 3D reasoning ability of Transformers in multi-view 3D human pose estimation. Recent works have focused on end-to-end, learning-based transformer designs, which struggle to resolve geometric information accurately, particularly under occlusion. Instead, we propose a novel hybrid model, MVGFormer, which has a series of geometric and appearance modules organized in an iterative manner. The geometry modules are learning-free and handle all viewpoint-dependent 3D tasks geometrically, which notably improves the model's generalization ability. The appearance modules are learnable and are dedicated to estimating 2D poses from image signals end-to-end, which enables them to achieve accurate estimates even when occlusion occurs, leading to a model that is both accurate and generalizable to new cameras and geometries. We evaluate our approach in both in-domain and out-of-domain settings, where our model consistently outperforms state-of-the-art methods, and does so by a significant margin in the out-of-domain setting. We will release the code and models: https://github.com/XunshanMan/MVGFormer.
Nov-18-2023
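The abstract does not spell out the geometric operations themselves. As a rough illustration only, the sketch below assumes the learning-free geometry step amounts to direct-linear-transform (DLT) triangulation of per-view 2D joint estimates from calibrated cameras, iterated with a learnable appearance module that re-estimates 2D poses around reprojections of the current 3D estimate; the names (triangulate_joint, project_joint, appearance_module) are illustrative and not taken from the paper or its code.

import numpy as np

def triangulate_joint(points_2d, projections):
    # Learning-free DLT triangulation of one joint from V calibrated views.
    # points_2d:   (V, 2) pixel coordinates of the joint, one row per view.
    # projections: (V, 3, 4) camera projection matrices.
    A = []
    for (u, v), P in zip(points_2d, projections):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.stack(A))   # null space of the (2V, 4) system
    X = Vt[-1]
    return X[:3] / X[3]                     # dehomogenize to a 3D point

def project_joint(point_3d, projections):
    # Reproject the current 3D joint into every view (e.g., to re-query image features).
    X = np.append(point_3d, 1.0)            # homogeneous coordinates
    x = projections @ X                     # (V, 3)
    return x[:, :2] / x[:, 2:3]

# One way such a geometry/appearance loop could be iterated, with a
# hypothetical learnable appearance_module (not defined here):
# for _ in range(num_iterations):
#     points_2d = appearance_module(images, project_joint(point_3d, projections))
#     point_3d = triangulate_joint(points_2d, projections)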
- Country:
  - Asia > Middle East > Israel (0.14)
  - North America > Canada
- Genre:
  - Research Report (1.00)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks (0.46)
      - Statistical Learning (0.46)
    - Robots > Humanoid Robots (0.62)
    - Vision > Video Understanding (0.73)