AsynFusion: Towards Asynchronous Latent Consistency Models for Decoupled Whole-Body Audio-Driven Avatars
Tianbao Zhang, Jian Zhao, Yuer Li, Zheng Zhu, Ping Hu, Zhaoxin Fan, Wenjun Wu, Xuelong Li
arXiv.org Artificial Intelligence
Whole-body audio-driven avatar pose and expression generation is a critical task for creating lifelike digital humans and enhancing the capabilities of interactive virtual agents, with wide-ranging applications in virtual reality, digital entertainment, and remote communication. Existing approaches often generate audio-driven facial expressions and gestures independently, which introduces a significant limitation: the lack of seamless coordination between facial and gestural elements, resulting in less natural and cohesive animations. To address this limitation, we propose AsynFusion, a novel framework that leverages diffusion transformers to achieve harmonious expression and gesture synthesis. The proposed method is built upon a dual-branch DiT architecture, which enables the parallel generation of facial expressions and gestures. Within the model, we introduce a Cooperative Synchronization Module to facilitate bidirectional feature interaction between the two modalities, and an Asynchronous LCM Sampling strategy to reduce computational overhead while maintaining high-quality outputs. Extensive experiments demonstrate that AsynFusion achieves state-of-the-art performance in generating real-time, synchronized whole-body animations, consistently outperforming existing methods in both quantitative and qualitative evaluations.
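The abstract describes an Asynchronous LCM Sampling strategy in which the two branches (expressions and gestures) are denoised with potentially different numbers of consistency steps. A minimal, purely illustrative sketch of how such a schedule could be interleaved by normalized denoising progress is shown below; the function name, branch labels, and step counts are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: interleave two consistency-sampling schedules with
# different step counts by normalized progress, so that cross-branch
# interaction (as in the Cooperative Synchronization Module) could occur
# at aligned points. All names here are illustrative assumptions.

def async_schedule(expr_steps, gest_steps):
    """Merge two denoising schedules, ordered by normalized progress.

    Returns a list of (branch, step_index) tuples indicating which branch
    denoises next; ties are broken in favor of the expression branch.
    """
    events = [("expr", i, i / expr_steps) for i in range(expr_steps)]
    events += [("gest", j, j / gest_steps) for j in range(gest_steps)]
    events.sort(key=lambda e: (e[2], e[0] != "expr"))
    return [(branch, idx) for branch, idx, _ in events]

# With 4 expression steps and 2 gesture steps, the gesture branch fires
# only at progress 0.0 and 0.5, while the expression branch fires at
# every quarter of the trajectory.
schedule = async_schedule(4, 2)
```

This kind of schedule lets the cheaper branch take fewer sampling steps, which is one plausible way an asynchronous strategy could reduce computational overhead while the branches stay coordinated.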
Oct-15-2025