MPE-TTS: Customized Emotion Zero-Shot Text-To-Speech Using Multi-Modal Prompt
Zhichao Wu, Yueteng Kang, Songjun Cao, Long Ma, Qiulin Li, Qun Yang
arXiv.org Artificial Intelligence
Most existing Zero-Shot Text-To-Speech (ZS-TTS) systems generate unseen speech from a single prompt, such as a reference speech or a text description, which limits their flexibility. We propose a customized emotion ZS-TTS system based on multi-modal prompts. The system disentangles speech into content, timbre, emotion, and prosody, allowing emotion prompts to be provided as text, image, or speech. To extract emotion information from these different prompt modalities, we propose a multi-modal prompt emotion encoder. Additionally, we introduce a prosody predictor to fit the distribution of prosody and propose an emotion consistency loss to preserve emotion information in the predicted prosody. A diffusion-based acoustic model generates the target mel-spectrogram. Both objective and subjective experiments demonstrate that our system outperforms existing systems in naturalness and similarity.
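The abstract does not give the form of the emotion consistency loss; a common choice for keeping two embeddings "consistent" is a cosine-similarity penalty. The sketch below is an illustrative assumption, not the paper's actual loss: `pred_emb` stands for an emotion embedding extracted from the predicted prosody and `ref_emb` for the embedding from the emotion prompt, both hypothetical names.

```python
import numpy as np

def emotion_consistency_loss(pred_emb, ref_emb):
    """Hypothetical consistency loss: 1 - cosine similarity between the
    emotion embedding of the predicted prosody (pred_emb) and the
    embedding of the emotion prompt (ref_emb). Returns 0 when the two
    embeddings point in the same direction, up to 2 when opposed."""
    pred = np.asarray(pred_emb, dtype=float)
    ref = np.asarray(ref_emb, dtype=float)
    pred = pred / np.linalg.norm(pred)
    ref = ref / np.linalg.norm(ref)
    return 1.0 - float(np.dot(pred, ref))
```

Minimizing such a term pulls the predicted prosody's emotion representation toward the prompt's, regardless of whether the prompt was text, image, or speech, since all modalities are mapped into one embedding space by the multi-modal prompt emotion encoder.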
May-27-2025
- Genre:
- Research Report (0.64)
- Technology:
- Information Technology
- Artificial Intelligence
- Machine Learning (1.00)
- Natural Language > Large Language Model (0.73)
- Speech
- Speech Recognition (0.46)
- Speech Synthesis (0.51)
- Vision (0.90)
- Sensing and Signal Processing > Image Processing (1.00)