Progressive Multi-view Human Mesh Recovery with Self-Supervision
Gong, Xuan, Song, Liangchen, Zheng, Meng, Planche, Benjamin, Chen, Terrence, Yuan, Junsong, Doermann, David, Wu, Ziyan
arXiv.org Artificial Intelligence
To date, multi-view 3D human mesh estimation has received little attention, despite its real-world applicability (e.g., motion capture, sports analysis) and its robustness to single-view ambiguities. Existing solutions typically generalize poorly to new settings, largely due to the limited diversity of image-mesh pairs in multi-view training data. To address this shortcoming, prior work has explored the use of synthetic images. However, beyond the usual visual gap between rendered and target data, synthetic-data-driven multi-view estimators also overfit to the camera-viewpoint distribution sampled during training, which usually differs from real-world distributions. Tackling both challenges, we propose a novel simulation-based training pipeline for multi-view human mesh recovery that (a) relies on intermediate 2D representations, which are more robust to the synthetic-to-real domain gap; (b) leverages learnable calibration and triangulation to adapt to more diverse camera setups; and (c) progressively aggregates multi-view information in a canonical 3D space to resolve ambiguities in the 2D representations. Through extensive benchmarking, we demonstrate the superiority of the proposed solution, especially in unseen in-the-wild scenarios.
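The triangulation step referenced in (b) can be illustrated with the classical Direct Linear Transform (DLT). This is a generic sketch assuming known per-view projection matrices, not the paper's learnable calibration-and-triangulation module; the function name and setup are illustrative only:

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one 3D point from N >= 2 views via DLT.

    Each view contributes two linear constraints on the homogeneous
    3D point; the stacked homogeneous system A X = 0 is solved by
    taking the right singular vector with the smallest singular value.

    proj_mats : list of (3, 4) camera projection matrices
    points_2d : list of (u, v) pixel observations, one per view
    """
    A = []
    for P, (u, v) in zip(proj_mats, points_2d):
        A.append(u * P[2] - P[0])  # constraint from the u coordinate
        A.append(v * P[2] - P[1])  # constraint from the v coordinate
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]               # null-space direction, up to scale
    return X[:3] / X[3]      # dehomogenize to Euclidean coordinates
```

In a learnable variant, per-view confidence weights predicted by a network could scale each pair of rows of `A` before the SVD, so unreliable views contribute less to the solution.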
Dec-10-2022