Google AI 'Translatotron' Can Make Anyone a Real-Time Polyglot

#artificialintelligence

Google AI yesterday released its latest research result in speech-to-speech translation, the futuristic-sounding "Translatotron." Billed as the world's first end-to-end speech-to-speech translation model, Translatotron promises the potential for real-time cross-linguistic conversation with low latency and high accuracy. Humans have long dreamed of a voice-based device that would let them simply leap over language barriers. While advances in deep learning have greatly improved the accuracy of speech recognition and machine translation, smooth conversation between speakers of different languages has remained hampered by unnatural pauses during machine processing. Google's wireless earbuds, the Pixel Buds, released in 2017, boasted real-time speech translation, but users found the practical experience less than satisfying.


Amazing Google AI speaks another language in your voice

#artificialintelligence

On Wednesday, Google unveiled Translatotron, an in-development speech-to-speech translation system. It's not the first system to translate speech from one language to another, but Google designed Translatotron to do something other systems can't: retain the original speaker's voice in the translated audio. In other words, the tech could make it sound like you're speaking a language you don't know -- a remarkable step forward on the path to breaking down the global language barrier. According to Google's AI blog, most speech-to-speech translation systems follow a three-step process: first they transcribe the speech into text, then they translate that text into the target language, and finally they synthesize speech from the translated text.
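As a rough illustration of the cascade the article describes, here is a minimal Python sketch; the stub functions stand in for real speech-recognition, translation, and text-to-speech models, and none of the names below are Google's actual code or APIs.

```python
# Illustrative sketch only: stubs stand in for real ASR, MT and TTS models.

def recognize(audio):                  # step 1: transcribe the source speech to text
    return "hello"

def translate_text(text):              # step 2: translate the transcript
    return {"hello": "hola"}.get(text, text)

def synthesize(text):                  # step 3: synthesize speech in the target language
    return f"<audio saying '{text}'>"

def cascaded_speech_to_speech(source_audio):
    """The conventional three-step cascade described above. Each stage waits on
    the one before it (adding latency), and the original speaker's voice is
    discarded as soon as the speech is reduced to text in step 1. Translatotron
    instead maps a source spectrogram to a target spectrogram with one model."""
    return synthesize(translate_text(recognize(source_audio)))

print(cascaded_speech_to_speech("<audio saying 'hello'>"))
```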


Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis

Neural Information Processing Systems

We describe a neural network-based system for text-to-speech (TTS) synthesis that is able to generate speech audio in the voice of many different speakers, including those unseen during training. Our system consists of three independently trained components: (1) a speaker encoder network, trained on a speaker verification task using an independent dataset of noisy speech from thousands of speakers without transcripts, to generate a fixed-dimensional embedding vector from seconds of reference speech from a target speaker; (2) a sequence-to-sequence synthesis network based on Tacotron 2, which generates a mel spectrogram from text, conditioned on the speaker embedding; (3) an auto-regressive WaveNet-based vocoder that converts the mel spectrogram into a sequence of time domain waveform samples. We demonstrate that the proposed model is able to transfer the knowledge of speaker variability learned by the discriminatively-trained speaker encoder to the new task, and is able to synthesize natural speech from speakers that were not seen during training. We quantify the importance of training the speaker encoder on a large and diverse speaker set in order to obtain the best generalization performance. Finally, we show that randomly sampled speaker embeddings can be used to synthesize speech in the voice of novel speakers dissimilar from those used in training, indicating that the model has learned a high quality speaker representation.
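To make the three-component design easier to follow, the sketch below wires the pieces together with NumPy placeholders in place of the trained networks; the class names, dimensions, and stand-in outputs are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

class SpeakerEncoder:
    """(1) Maps a few seconds of reference audio to a fixed-dimensional embedding."""
    def __init__(self, embedding_dim: int = 256):
        self.embedding_dim = embedding_dim

    def embed(self, reference_audio: np.ndarray) -> np.ndarray:
        # Real model: trained on speaker verification; this is a random stand-in.
        e = np.random.randn(self.embedding_dim)
        return e / np.linalg.norm(e)

class Tacotron2Synthesizer:
    """(2) Sequence-to-sequence network: text + speaker embedding -> mel spectrogram."""
    def __init__(self, n_mels: int = 80):
        self.n_mels = n_mels

    def synthesize(self, text: str, speaker_embedding: np.ndarray) -> np.ndarray:
        n_frames = 20 * len(text)              # placeholder for the predicted length
        return np.zeros((n_frames, self.n_mels))

class WaveNetVocoder:
    """(3) Auto-regressive vocoder: mel spectrogram -> time-domain waveform."""
    def __init__(self, hop_length: int = 256):
        self.hop_length = hop_length

    def invert(self, mel: np.ndarray) -> np.ndarray:
        return np.zeros(mel.shape[0] * self.hop_length)

# Data flow: only the speaker encoder sees the reference voice; the synthesizer
# is conditioned on its embedding, which is what allows voice transfer to
# speakers unseen during training.
reference = np.random.randn(3 * 16000)         # ~3 s of 16 kHz reference audio
embedding = SpeakerEncoder().embed(reference)
mel = Tacotron2Synthesizer().synthesize("Hello world", embedding)
waveform = WaveNetVocoder().invert(mel)
print(embedding.shape, mel.shape, waveform.shape)
```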


Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning

arXiv.org Artificial Intelligence

We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training ten times faster. We scale Deep Voice 3 to data set sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on one single-GPU server.
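For readers unfamiliar with fully-convolutional attention-based synthesis, the PyTorch sketch below shows the general flavor of such building blocks (a gated causal 1-D convolution and dot-product attention over encoder states); the dimensions and structure are illustrative assumptions, not Deep Voice 3's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    """Causal 1-D convolution with a GLU gate and a scaled residual connection."""
    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        self.pad = kernel_size - 1                     # left-pad only => causal
        self.conv = nn.Conv1d(channels, channels * 2, kernel_size)

    def forward(self, x):                              # x: (batch, channels, time)
        h = F.pad(x, (self.pad, 0))
        h = F.glu(self.conv(h), dim=1)                 # gated linear unit
        return (x + h) * (0.5 ** 0.5)                  # keep activation scale stable

def dot_product_attention(queries, keys, values):
    """queries: (B, T_dec, D); keys, values: (B, T_enc, D)."""
    scores = torch.bmm(queries, keys.transpose(1, 2)) / keys.size(-1) ** 0.5
    weights = F.softmax(scores, dim=-1)                # monotonicity tricks omitted
    return torch.bmm(weights, values), weights

# Toy forward pass: decoder queries attend over convolutional "encoder" states.
B, D, T_enc, T_dec = 2, 64, 40, 10
encoder_out = GatedConvBlock(D)(torch.randn(B, D, T_enc)).transpose(1, 2)
context, attn = dot_product_attention(torch.randn(B, T_dec, D), encoder_out, encoder_out)
print(context.shape, attn.shape)                       # (2, 10, 64), (2, 10, 40)
```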