DiffCSS: Diverse and Expressive Conversational Speech Synthesis with Diffusion Models
Weihao Wu, Zhiwei Lin, Yixuan Zhou, Jingbei Li, Rui Niu, Qinghua Wu, Songjun Cao, Long Ma, Zhiyong Wu
arXiv.org Artificial Intelligence
Conversational speech synthesis (CSS) aims to synthesize speech that is both contextually appropriate and expressive, and considerable effort has been devoted to improving the understanding of conversational context. However, existing CSS systems are limited to deterministic prediction, overlooking the diversity of potential responses. Moreover, they rarely employ language model (LM)-based TTS backbones, limiting the naturalness and quality of the synthesized speech. To address these issues, in this paper we propose DiffCSS, an innovative CSS framework that leverages diffusion models and an LM-based TTS backbone to generate diverse, expressive, and contextually coherent speech. A diffusion-based context-aware prosody predictor is proposed to sample diverse prosody embeddings conditioned on the multimodal conversational context. A prosody-controllable LM-based TTS backbone is then developed to synthesize high-quality speech from the sampled prosody embeddings. Experimental results demonstrate that the speech synthesized by DiffCSS is more diverse, contextually coherent, and expressive than that of existing CSS systems.
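The core idea in the abstract, sampling *diverse* prosody embeddings from a diffusion model conditioned on the conversational context, can be illustrated with a minimal toy sketch. This is not the authors' model: the denoiser here is an untrained random linear map standing in for a trained network, the dimensions and the linear noise schedule are arbitrary assumptions, and only the DDPM ancestral-sampling loop itself is standard. The point is that repeated sampling from the same context yields different prosody embeddings, which a deterministic predictor cannot do.

```python
import numpy as np

class ToyProsodyDiffusion:
    """Toy DDPM that samples prosody embeddings conditioned on a context
    vector. Hypothetical stand-in for DiffCSS's prosody predictor: the
    'denoiser' eps_theta is a random linear map, not a trained network."""

    def __init__(self, dim=8, ctx_dim=16, T=50, seed=0):
        rng = np.random.default_rng(seed)
        self.dim = dim
        self.T = T
        # linear noise schedule (an assumption; any DDPM schedule works)
        self.betas = np.linspace(1e-4, 0.02, T)
        self.alphas = 1.0 - self.betas
        self.alphas_bar = np.cumprod(self.alphas)
        # untrained stand-in weights for eps_theta(x_t, ctx, t)
        self.Wx = rng.normal(scale=0.1, size=(dim, dim))
        self.Wc = rng.normal(scale=0.1, size=(dim, ctx_dim))

    def eps_theta(self, x, ctx, t):
        # predicted noise, conditioned on the conversational context
        # embedding ctx (in DiffCSS this would be a multimodal encoding)
        return x @ self.Wx.T + ctx @ self.Wc.T

    def sample(self, ctx, rng):
        # standard DDPM ancestral sampling: start from Gaussian noise and
        # iteratively denoise, injecting fresh noise at every step but the last
        x = rng.standard_normal(self.dim)
        for t in reversed(range(self.T)):
            eps = self.eps_theta(x, ctx, t)
            a, ab = self.alphas[t], self.alphas_bar[t]
            mean = (x - (1.0 - a) / np.sqrt(1.0 - ab) * eps) / np.sqrt(a)
            noise = rng.standard_normal(self.dim) if t > 0 else 0.0
            x = mean + np.sqrt(self.betas[t]) * noise
        return x  # one prosody embedding for this context

model = ToyProsodyDiffusion()
ctx = np.ones(16)  # dummy conversational-context embedding
p1 = model.sample(ctx, np.random.default_rng(1))
p2 = model.sample(ctx, np.random.default_rng(2))
# same context, two distinct prosody samples -> diversity by construction
```

In the full system each sampled embedding would then condition the LM-based TTS backbone, so one conversation history can be rendered with several plausible prosodic realizations.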
Feb-27-2025
- Genre:
- Research Report (0.70)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language (1.00)
- Speech > Speech Synthesis (0.89)