Beyond Global Emotion: Fine-Grained Emotional Speech Synthesis with Dynamic Word-Level Modulation
Sirui Wang, Andong Chen, Tiejun Zhao
arXiv.org Artificial Intelligence
Emotional text-to-speech (E-TTS) is central to creating natural and trustworthy human-computer interaction. Existing systems typically rely on sentence-level control through predefined labels, reference audio, or natural language prompts. While effective for global emotion expression, these approaches fail to capture dynamic shifts within a sentence. To address this limitation, we introduce Emo-FiLM, a fine-grained emotion modeling framework for LLM-based TTS. Emo-FiLM aligns frame-level features from emotion2vec to words to obtain word-level emotion annotations, and maps them through a Feature-wise Linear Modulation (FiLM) layer, enabling word-level emotion control by directly modulating text embeddings. To support evaluation, we construct the Fine-grained Emotion Dynamics Dataset (FEDD) with detailed annotations of emotional transitions. Experiments show that Emo-FiLM outperforms existing approaches on both global and fine-grained tasks, demonstrating its effectiveness and generality for expressive speech synthesis.
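The core mechanism described above is Feature-wise Linear Modulation (FiLM): each word's text embedding is scaled and shifted by parameters predicted from that word's emotion embedding, so emotion can vary word by word within a sentence. The sketch below illustrates the general FiLM operation in NumPy; the dimensions, weight matrices, and `film` function are illustrative assumptions, not the paper's actual architecture or emotion2vec features.

```python
import numpy as np

rng = np.random.default_rng(0)

d_text, d_emo = 8, 4  # hypothetical embedding sizes

# Hypothetical linear FiLM generator: predicts a per-dimension
# scale (gamma) and shift (beta) from each word's emotion embedding.
W_gamma = rng.normal(size=(d_emo, d_text))
W_beta = rng.normal(size=(d_emo, d_text))

def film(text_emb, emo_emb):
    """Feature-wise Linear Modulation: modulate each word's text
    embedding with parameters derived from its emotion embedding."""
    gamma = emo_emb @ W_gamma   # (n_words, d_text)
    beta = emo_emb @ W_beta     # (n_words, d_text)
    return gamma * text_emb + beta

# Three words, each carrying its own emotion vector,
# so modulation is word-level rather than sentence-level.
text = rng.normal(size=(3, d_text))
emo = rng.normal(size=(3, d_emo))
out = film(text, emo)
print(out.shape)  # (3, 8)
```

Because gamma and beta are computed independently per word, editing one word's emotion vector changes only that word's modulated embedding, which is what enables the fine-grained control the abstract describes.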
Sep-26-2025