EXPOTION: Facial Expression and Motion Control for Multimodal Music Generation
Fathinah Izzati, Xinyue Li, Gus Xia
We propose Expotion (Facial Expression and Motion Control for Multimodal Music Generation), a generative model that leverages multimodal visual controls, specifically human facial expressions and upper-body motion, together with text prompts to produce expressive and temporally accurate music. We adopt parameter-efficient fine-tuning (PEFT) on a pretrained text-to-music generation model, enabling fine-grained adaptation to the multimodal controls with only a small dataset. To ensure precise synchronization between video and music, we introduce a temporal smoothing strategy that aligns the modalities. Experiments demonstrate that integrating visual features alongside textual descriptions improves the generated music in musicality, creativity, beat-tempo consistency, temporal alignment with the video, and text adherence, surpassing both the proposed baselines and existing state-of-the-art video-to-music generation models. Additionally, we introduce a novel dataset of 7 hours of synchronized video recordings capturing expressive facial and upper-body gestures aligned with corresponding music, offering significant potential for future research in multimodal and interactive music generation.
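The abstract does not specify how the temporal smoothing and video-to-music alignment are implemented, so the following is a minimal sketch of one plausible reading: per-frame visual embeddings are smoothed with a moving average and then resampled to the music model's token rate before being used as conditioning. The function name, the smoothing window, and the frame/token rates are all illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: smooth per-frame visual features (e.g., facial-expression
# and upper-body-motion embeddings) over time, then resample them onto the
# music model's token grid so video and music conditioning stay synchronized.
import numpy as np

def smooth_and_align(visual_feats: np.ndarray,
                     video_fps: float,
                     token_rate: float,
                     window: int = 5) -> np.ndarray:
    """Smooth (num_frames, dim) features, return (num_tokens, dim) features
    aligned to the music token rate. All parameters are assumptions."""
    num_frames, dim = visual_feats.shape

    # Moving-average smoothing along the time axis (one simple choice of
    # temporal smoothing; the paper's exact strategy may differ).
    kernel = np.ones(window) / window
    smoothed = np.empty_like(visual_feats)
    for d in range(dim):
        smoothed[:, d] = np.convolve(visual_feats[:, d], kernel, mode="same")

    # Linear interpolation from the video frame grid to the audio token grid,
    # so each music token sees a temporally matched visual feature vector.
    duration = num_frames / video_fps
    num_tokens = int(round(duration * token_rate))
    frame_times = np.arange(num_frames) / video_fps
    token_times = np.arange(num_tokens) / token_rate
    aligned = np.empty((num_tokens, dim))
    for d in range(dim):
        aligned[:, d] = np.interp(token_times, frame_times, smoothed[:, d])
    return aligned

# Usage: 10 s of 30 fps video features aligned to an assumed 50 Hz token grid.
feats = np.random.randn(300, 512)
conditioning = smooth_and_align(feats, video_fps=30.0, token_rate=50.0)
print(conditioning.shape)  # (500, 512)
```

Under this reading, the smoothed and resampled features would be injected into the pretrained text-to-music model through PEFT adapters (e.g., low-rank updates), leaving the base model's weights frozen; the abstract confirms PEFT is used but not the specific adapter design.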
arXiv.org Artificial Intelligence
Jul-8-2025