Ask, Pose, Unite: Scaling Data Acquisition for Close Interactions with Vision Language Models
Bravo-Sánchez, Laura, Heo, Jaewoo, Weng, Zhenzhen, Wang, Kuan-Chieh, Yeung-Levy, Serena
–arXiv.org Artificial Intelligence
Social dynamics in close human interactions pose significant challenges for Human Mesh Estimation (HME), particularly due to the complexity of physical contact and the scarcity of training data. To address these challenges, we introduce a novel data generation method that uses Large Vision Language Models (LVLMs) to annotate contact maps, which then guide test-time optimization to produce paired images and pseudo-ground-truth meshes. This methodology not only alleviates the annotation burden but also enables the assembly of a comprehensive dataset specifically tailored to close interactions in HME. Our Ask Pose Unite (APU) dataset, comprising over 6.2k human mesh pairs in contact and covering diverse interaction types, is curated from images depicting naturalistic person-to-person scenes. We empirically show that training a diffusion-based contact prior on our dataset and using it as guidance during optimization improves mesh estimation on unseen interactions. Our work addresses the longstanding challenge of data scarcity for close interactions in HME, enhancing the field's ability to handle complex interaction scenarios.
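The paper's exact formulation is not reproduced here; as a rough illustration of how an annotated contact map can steer test-time optimization of two human meshes, the hypothetical sketch below adds a vertex-pair contact term to a simple optimization loop. The pairwise contact-map format, tensor names, and the use of bare vertex tensors in place of an SMPL body model are assumptions for illustration, not the authors' API.

```python
import torch

def contact_loss(verts_a, verts_b, contact_pairs):
    """Penalize distance between vertex pairs annotated as being in contact.

    verts_a, verts_b: (V, 3) vertex tensors for the two people.
    contact_pairs:    (K, 2) long tensor of (index_on_a, index_on_b) pairs,
                      assumed to be derived from an LVLM-annotated contact map.
    """
    pa = verts_a[contact_pairs[:, 0]]
    pb = verts_b[contact_pairs[:, 1]]
    return ((pa - pb) ** 2).sum(dim=-1).mean()

# Toy usage with dummy meshes standing in for SMPL vertices (6890 per person).
verts_a = torch.randn(6890, 3, requires_grad=True)
verts_b = torch.randn(6890, 3, requires_grad=True)
contact_pairs = torch.randint(0, 6890, (32, 2))

optimizer = torch.optim.Adam([verts_a, verts_b], lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    # In practice this would be combined with reprojection and prior terms.
    loss = contact_loss(verts_a, verts_b, contact_pairs)
    loss.backward()
    optimizer.step()
```

In the full pipeline described in the abstract, terms like this would be weighted alongside image-evidence and body-prior losses, and the learned diffusion-based contact prior would supply additional guidance during the same optimization.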
Sep-30-2024
- Country:
  - Asia (0.28)
  - Europe (0.46)
  - North America > United States (0.46)
- Genre:
  - Research Report (1.00)
- Industry:
  - Government > Regional Government (0.46)
  - Information Technology > Security & Privacy (0.67)
  - Law (0.93)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning (1.00)
    - Natural Language > Large Language Model (0.46)
    - Representation & Reasoning > Optimization (0.34)
    - Vision (1.00)