UtterTune: LoRA-Based Target-Language Pronunciation Edit and Control in Multilingual Text-to-Speech

Kato, Shuhei

arXiv.org Artificial Intelligence 

ABSTRACT

We propose UtterTune, a lightweight adaptation method that fine-tunes a multilingual text-to-speech (TTS) system built on a large language model (LLM) architecture, designed to enhance the controllability of pronunciation in a target language while preserving performance in others. While LLM architectures have enabled TTS models to achieve remarkable naturalness, accurately modeling grapheme-to-phoneme (G2P) mapping and prosody remains challenging, especially when the model omits an explicit G2P module and directly processes minimally encoded text (e.g., byte-pair encoding). UtterTune leverages low-rank adaptation to enable control of segmental pronunciation and pitch accent at the phoneme level for Japanese speech, the target language in this paper, while maintaining naturalness and speaker similarity in a zero-shot setting. Objective and subjective evaluations confirm its effectiveness.

Index Terms -- text-to-speech, large language model, low-rank adaptation, pronunciation, controllability

1. INTRODUCTION

Text-to-speech (TTS) models based on large language model (LLM) architectures (LLM-TTS in this paper) have demonstrated exceptional naturalness, especially in zero-shot multi-speaker and multilingual synthesis, leading the way in speech synthesis technology [1, 2, 3, 4]; however, reproducing accurate pronunciation remains challenging. Some multilingual LLM-TTS systems, such as CosyVoice 2 [4], are designed to take raw text (characters) as input and tokenize it via byte-pair encoding (BPE) [5], without explicit phonemic or prosodic markers. This design contrasts with conventional neural sequence-to-sequence TTS, which typically converts input text into phonemes (grapheme-to-phoneme, G2P) and, if needed, prosody information using a text frontend before feeding it into the model [6, 7, 8, 1].
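As a toy illustration of the byte-pair-encoding input scheme described above (this is the generic BPE merge procedure of [5], not the actual CosyVoice 2 tokenizer, and the training string is invented for the example), a BPE learner repeatedly merges the most frequent adjacent symbol pair:

```python
from collections import Counter

def learn_bpe(text, num_merges):
    """Greedy BPE sketch: start from characters, repeatedly merge the
    most frequent adjacent pair. Returns final symbols and merge list."""
    symbols = list(text)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append((a, b))
        merged, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                merged.append(a + b)   # fuse the pair into one symbol
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols, merges

tokens, merges = learn_bpe("low low low lower", 3)
```

Because the merges are learned purely from character statistics, the resulting tokens carry no phonemic or prosodic information; that is the gap UtterTune targets.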
On the other hand, such models require a large amount of paired speech and text covering the diversity of the target language, because they must predict segmental pronunciation and prosody in a purely data-driven manner.
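For reference, the low-rank adaptation (LoRA) update that UtterTune builds on can be sketched as follows; the dimensions, rank, and scaling here are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal LoRA sketch: a frozen weight W plus a trainable low-rank
# correction B @ A, with rank r much smaller than the layer dimensions.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2                      # illustrative sizes, r << d

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                      # trainable up-projection,
                                              # zero-initialized so the
                                              # adapted model starts
                                              # identical to the base one

def forward(x):
    """y = W x + B (A x); only A and B receive gradient updates."""
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
assert np.allclose(forward(x), W @ x)         # no change before training

# After fine-tuning, the correction can be folded into W, so inference
# costs nothing extra and other languages keep the unmerged base model.
B = rng.standard_normal((d_out, r))           # stand-in for trained B
W_merged = W + B @ A
assert np.allclose(forward(x), W_merged @ x)
```

The key property for this paper is that only the small factors A and B change, which is why pronunciation control in the target language can be added without disturbing the base model's behavior in other languages.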
