
Collaborating Authors

 Kim, Sangjin


Encoding Speaker-Specific Latent Speech Feature for Speech Synthesis

arXiv.org Artificial Intelligence

In this work, we propose a novel method for modeling numerous speakers that expresses each speaker's overall characteristics in detail, like a trained multi-speaker model, without additional training on the target speaker's dataset. Although various works with similar goals have been actively studied, their performance has not yet reached that of trained multi-speaker models because of fundamental limitations. To overcome these limitations, we propose effective methods for learning features and for representing target speakers' speech characteristics by discretizing the features and conditioning a speech synthesis model on them. In subjective similarity evaluation, our method obtained a significantly higher similarity mean opinion score (SMOS) with unseen speakers than a best-performing multi-speaker model achieved with seen speakers. The proposed method also outperforms a zero-shot method by significant margins. Furthermore, our method shows remarkable performance in generating new artificial speakers. In addition, we demonstrate that the encoded latent features are sufficiently informative to completely reconstruct an original speaker's speech. This implies that our method can be used as a general methodology for encoding and reconstructing speakers' characteristics in various tasks.

Recently, modeling the numerous speakers in the real world has been actively studied. Previous works (Gibiansky et al., 2017; Ping et al., 2018; Chen et al., 2020; Kim et al., 2020; 2021) used a trainable speaker embedding matrix to learn the speech characteristics of each speaker within a single model; this is commonly referred to as multi-speaker speech synthesis. Because this approach expresses each speaker's characteristics in a comparable way and shares common information among speakers, it can synthesize high-quality speech for multiple speakers with relatively little training data compared to training a separate model per speaker. However, the model must be retrained for all speakers whenever a new speaker is added, and it may not synthesize high-quality speech for speakers with relatively small datasets.
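To make the contrast concrete, below is a minimal PyTorch-style sketch of the two conditioning schemes the abstract compares: a trainable per-speaker embedding table, which must be retrained when speakers are added, versus encoding reference speech and discretizing it against a codebook, which can represent an unseen speaker without retraining. All module names, dimensions, and the pooling step are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class EmbeddingTableConditioning(nn.Module):
    """Classic multi-speaker conditioning: one trainable vector per seen speaker."""
    def __init__(self, num_speakers: int, dim: int = 256):
        super().__init__()
        # The whole table must be retrained whenever a new speaker is added.
        self.table = nn.Embedding(num_speakers, dim)

    def forward(self, speaker_id: torch.Tensor) -> torch.Tensor:
        return self.table(speaker_id)  # (batch, dim) conditioning vector


class DiscretizedFeatureConditioning(nn.Module):
    """Encode reference speech and snap each frame-level feature to its nearest
    codebook entry, so an unseen speaker can be represented without retraining."""
    def __init__(self, feat_dim: int = 80, dim: int = 256, codebook_size: int = 512):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, dim, batch_first=True)
        self.codebook = nn.Parameter(torch.randn(codebook_size, dim))

    def forward(self, ref_mel: torch.Tensor) -> torch.Tensor:
        # ref_mel: (batch, frames, feat_dim) mel-spectrogram of reference speech
        hidden, _ = self.encoder(ref_mel)                       # (batch, frames, dim)
        codebook = self.codebook.unsqueeze(0).expand(hidden.size(0), -1, -1)
        codes = torch.cdist(hidden, codebook).argmin(dim=-1)    # discrete indices
        quantized = self.codebook[codes]                        # (batch, frames, dim)
        return quantized.mean(dim=1)                            # pooled speaker condition


# Usage: the pooled vector would condition the synthesis model, e.g. its decoder.
cond = DiscretizedFeatureConditioning()(torch.randn(2, 120, 80))  # -> (2, 256)
```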


VITS2: Improving Quality and Efficiency of Single-Stage Text-to-Speech with Adversarial Learning and Architecture Design

arXiv.org Artificial Intelligence

Single-stage text-to-speech models have been actively studied recently, and their results have outperformed two-stage pipeline systems. Although the previous single-stage model has made great progress, there is room for improvement in its intermittent unnaturalness, computational efficiency, and strong dependence on phoneme conversion. In this work, we introduce VITS2, a single-stage text-to-speech model that efficiently synthesizes more natural speech by improving several aspects of the previous work. We propose improved structures and training mechanisms and show that they are effective in improving naturalness, the similarity of speech characteristics in a multi-speaker model, and the efficiency of training and inference. Furthermore, we demonstrate that the strong dependence on phoneme conversion in previous works can be significantly reduced with our method, which enables a fully end-to-end single-stage approach.
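To make the "dependence on phoneme conversion" point concrete, here is a small Python sketch (illustrative only, not the VITS2 code) contrasting a phoneme-based frontend, which needs an external lexicon or grapheme-to-phoneme model for every word, with a character-level frontend that a fully end-to-end model can consume directly.

```python
from typing import List

# Hypothetical grapheme-to-phoneme table; real pipelines rely on a large lexicon or G2P model.
PHONEME_LEXICON = {
    "speech": ["S", "P", "IY1", "CH"],
    "synthesis": ["S", "IH1", "N", "TH", "AH0", "S", "AH0", "S"],
}

def phoneme_frontend(text: str) -> List[str]:
    """Pipeline-style input: raises KeyError for any word missing from the lexicon."""
    return [p for word in text.lower().split() for p in PHONEME_LEXICON[word]]

def character_frontend(text: str, vocab: str = "abcdefghijklmnopqrstuvwxyz '") -> List[int]:
    """End-to-end-style input: raw normalized characters mapped to integer IDs."""
    return [vocab.index(ch) for ch in text.lower() if ch in vocab]

print(phoneme_frontend("speech synthesis"))    # requires every word to be in the lexicon
print(character_frontend("speech synthesis"))  # works for any normalized text
```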