Allo-AVA: A Large-Scale Multimodal Conversational AI Dataset for Allocentric Avatar Gesture Animation
The scarcity of high-quality, multimodal training data severely hinders the creation of lifelike avatar animations for conversational AI in virtual environments. Existing datasets often lack the intricate synchronization between speech, facial expressions, and body movements that characterizes natural human communication. To address this critical gap, we introduce Allo-AVA, a large-scale dataset specifically designed for text- and audio-driven avatar gesture animation in an allocentric (third-person point of view) context. Allo-AVA consists of $\sim$1,250 hours of diverse video content, complete with audio, transcripts, and extracted keypoints. Allo-AVA uniquely maps these keypoints to precise timestamps, enabling accurate replication of human movements (body and facial gestures) in synchronization with speech. This comprehensive resource enables the development and evaluation of more natural, context-aware avatar animation models, potentially transforming applications ranging from virtual reality to digital assistants.
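Since the dataset's key property is keypoints aligned to precise timestamps, a minimal sketch of consuming such data follows. The JSONL layout, the field names (`t`, `keypoints`), and the file path are hypothetical assumptions for illustration; the abstract does not specify a release schema.

```python
"""A minimal sketch of aligning timestamped keypoint frames with transcript
words, assuming a hypothetical one-JSON-object-per-line format:
    {"t": <seconds>, "keypoints": [[x, y, confidence], ...]}
"""

import bisect
import json


def load_keypoint_frames(path):
    """Load keypoint frames from a JSONL file and sort them by timestamp."""
    with open(path) as f:
        frames = [json.loads(line) for line in f]
    frames.sort(key=lambda fr: fr["t"])
    return frames


def frame_nearest(frames, timestamp):
    """Return the keypoint frame whose timestamp is closest to `timestamp`."""
    times = [fr["t"] for fr in frames]
    i = bisect.bisect_left(times, timestamp)
    # Consider the frames on either side of the insertion point.
    candidates = [c for c in (i - 1, i) if 0 <= c < len(frames)]
    best = min(candidates, key=lambda c: abs(times[c] - timestamp))
    return frames[best]


if __name__ == "__main__":
    # Hypothetical file name; substitute a real clip from the dataset.
    frames = load_keypoint_frames("clip_0001.keypoints.jsonl")
    # A transcript word with a start time, as word-level alignments provide.
    word = {"text": "hello", "start": 12.34}
    frame = frame_nearest(frames, word["start"])
    print(f"'{word['text']}' -> {len(frame['keypoints'])} keypoints at t={frame['t']:.2f}s")
```

Nearest-timestamp lookup is the simplest alignment strategy; a model pipeline would more likely interpolate between the two bracketing frames, but the data access pattern is the same.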
arXiv.org Artificial Intelligence
Oct-21-2024
- Genre:
  - Research Report (0.82)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks > Deep Learning (0.46)
      - Statistical Learning (0.68)
    - Natural Language (1.00)
    - Representation & Reasoning (1.00)
    - Vision > Face Recognition (0.66)
  - Information Technology > Graphics > Animation (1.00)
  - Information Technology > Human Computer Interaction > Interfaces (1.00)