Comparative Evaluation of Expressive Japanese Character Text-to-Speech with VITS and Style-BERT-VITS2
Rackauckas, Zackary, Hirschberg, Julia
–arXiv.org Artificial Intelligence
Synthesizing expressive Japanese character speech poses unique challenges due to pitch-accent sensitivity and stylistic variability. This paper empirically evaluates two open-source text-to-speech models, VITS and Style-BERT-VITS2 JP Extra (SBV2JE), on in-domain, character-driven Japanese speech. Using three character-specific datasets, we evaluate the models on naturalness (mean opinion score, MOS, and comparative mean opinion score, CMOS), intelligibility (word error rate, WER), and speaker consistency. SBV2JE matches human ground truth in naturalness (MOS 4.37 vs. 4.38), achieves a lower WER, and shows a slight preference in CMOS. Enhanced by pitch-accent controls and a WavLM-based discriminator, SBV2JE proves effective for applications such as language learning and character dialogue generation, despite higher computational demands.
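Of the metrics above, MOS and CMOS come from human raters, while WER is computed automatically. As a rough illustration of how WER is commonly defined (word-level edit distance normalized by reference length), here is a minimal sketch; it assumes whitespace-tokenized input, whereas Japanese evaluation would require a morphological segmenter or character-level scoring, and the paper's exact WER tooling is not specified here.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over word tokens,
    normalized by the reference length. Whitespace tokenization is
    an illustrative assumption, not the paper's actual pipeline."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat down", "the cat sat down"))  # 0.0
print(wer("the cat sat down", "the bat sat"))       # 0.5
```

In practice, libraries such as jiwer wrap this computation and add normalization options; the sketch above is only meant to make the reported metric concrete.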
Dec-2-2025
- Country:
  - Asia
    - Japan > Honshū
      - Chūbu > Aichi Prefecture
        - Nagoya (0.04)
      - Kansai (0.04)
      - Kantō > Tokyo Metropolis Prefecture
        - Tokyo (0.04)
    - Thailand > Chiang Mai
      - Chiang Mai (0.04)
  - Europe > France
    - Île-de-France > Paris > Paris (0.04)
  - North America > United States (0.04)
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language (1.00)
- Speech > Speech Synthesis (0.75)
- Vision > Optical Character Recognition (0.63)