Speech Synthesis


Alexa will soon be able to read the news just like a professional

#artificialintelligence

Amazon's Alexa continues to learn new party tricks, with the latest being a "newscaster style" speaking voice that will be launching on enabled devices in a few weeks' time. You can listen to samples of the speaking style below, and the results, well, they speak for themselves. The voice can't be mistaken for a human, but it does incorporate stresses into sentences in the same way you'd expect from a TV or radio newscaster. According to Amazon's own surveys, users prefer it to Alexa's regular speaking style when listening to articles (though getting news from smart speakers still has lots of other problems). Amazon says the new speaking style is enabled by the company's development of "neural text-to-speech" technology, or NTTS.


WaveGlow: A Flow-based Generative Network for Speech Synthesis

arXiv.org Artificial Intelligence

In this paper we propose WaveGlow: a flow-based network capable of generating high quality speech from mel-spectrograms. WaveGlow combines insights from Glow and WaveNet in order to provide fast, efficient and high-quality audio synthesis, without the need for auto-regression. WaveGlow is implemented using only a single network, trained using only a single cost function: maximizing the likelihood of the training data, which makes the training procedure simple and stable. Our PyTorch implementation produces audio samples at a rate of more than 500 kHz on an NVIDIA V100 GPU. Mean Opinion Scores show that it delivers audio quality as good as the best publicly available WaveNet implementation. All code will be made publicly available online.
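The appeal of that single cost function is easiest to see in code. The PyTorch snippet below is a rough illustrative sketch, not NVIDIA's implementation: a toy invertible affine transform stands in for WaveGlow's stack of invertible 1x1 convolutions and coupling layers, and the loss is just the negative log-likelihood of the audio under a spherical Gaussian prior plus the log-determinant of the transform's Jacobian. All sizes are placeholders.

```python
# Minimal sketch (not NVIDIA's code) of WaveGlow-style likelihood training:
# push audio through an invertible transform conditioned on mel features and
# maximize its likelihood under N(0, sigma^2 I).
import torch
import torch.nn as nn

class ToyAffineFlow(nn.Module):
    def __init__(self, n_samples, n_mel):
        super().__init__()
        # Predict a per-sample log-scale and shift from the mel condition.
        self.cond = nn.Linear(n_mel, 2 * n_samples)

    def forward(self, audio, mel):
        log_s, b = self.cond(mel).chunk(2, dim=-1)
        z = audio * torch.exp(log_s) + b      # invertible affine transform
        log_det = log_s.sum(dim=-1)           # log|det J| of that transform
        return z, log_det

def flow_nll_loss(z, log_det, sigma=1.0):
    # The only training objective: negative log-likelihood under the prior.
    nll = (z ** 2).sum(dim=-1) / (2 * sigma ** 2) - log_det
    return nll.mean()

if __name__ == "__main__":
    flow = ToyAffineFlow(n_samples=256, n_mel=80)
    audio = torch.randn(4, 256)               # batch of raw-audio frames
    mel = torch.randn(4, 80)                   # matching mel-spectrogram frames
    z, log_det = flow(audio, mel)
    loss = flow_nll_loss(z, log_det)
    loss.backward()
    print(float(loss))
```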


Investigation of enhanced Tacotron text-to-speech synthesis systems with self-attention for pitch accent language

arXiv.org Machine Learning

End-to-end speech synthesis is a promising approach that directly converts raw text to speech. Although it has been shown that Tacotron2 outperforms classical pipeline systems with regard to naturalness in English, its applicability to other languages is still unknown. Japanese could be one of the most difficult languages for which to achieve end-to-end speech synthesis, largely due to its character diversity and pitch accents. Therefore, state-of-the-art systems are still based on a traditional pipeline framework that requires a separate text analyzer and duration model. Towards end-to-end Japanese speech synthesis, we extend Tacotron to systems with self-attention to capture long-term dependencies related to pitch accents and compare their audio quality with classical pipeline systems under various conditions to show their pros and cons. In a large-scale listening test, we investigated the impacts of the presence of accentual-type labels, the use of forced or predicted alignments, and acoustic features used as local condition parameters of the WaveNet vocoder. Our results reveal that although the proposed systems still do not match the quality of a top-line pipeline system for Japanese, we show important stepping stones towards end-to-end Japanese speech synthesis.
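To make the core idea concrete, the sketch below (illustrative only, not the authors' code) adds a multi-head self-attention layer with a residual connection on top of a Tacotron-style bidirectional recurrent encoder, so every position can attend to long-range context such as accentual phrase structure. Layer sizes are arbitrary, and it assumes a recent PyTorch release.

```python
# Illustrative sketch of "Tacotron encoder + self-attention"; sizes are arbitrary.
import torch
import torch.nn as nn

class SelfAttentionEncoder(nn.Module):
    def __init__(self, n_symbols=128, d_model=256, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, d_model)
        # Recurrent encoder as in Tacotron, followed by multi-head self-attention.
        self.rnn = nn.LSTM(d_model, d_model // 2, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, text_ids):
        x = self.embed(text_ids)
        h, _ = self.rnn(x)
        # Self-attention lets every position look at the whole utterance.
        ctx, _ = self.attn(h, h, h)
        return h + ctx                       # residual connection

if __name__ == "__main__":
    enc = SelfAttentionEncoder()
    ids = torch.randint(0, 128, (2, 50))     # batch of token-id sequences
    print(enc(ids).shape)                    # torch.Size([2, 50, 256])
```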


mozilla/TTS

#artificialintelligence

This project is a part of Mozilla Common Voice. TTS aims to provide a Text2Speech engine that is low in cost and high in quality. To begin with, you can hear a sample here. The model here is highly inspired by Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model; however, it has many important updates over the baseline model that make training faster and computationally very efficient. Feel free to experiment with new ideas and propose changes.


Microsoft's new neural text-to-speech service helps machines speak like people

#artificialintelligence

Microsoft has reached a milestone in text-to-speech synthesis with a production system that uses deep neural networks to make the voices of computers nearly indistinguishable from recordings of people. With human-like natural prosody and clear articulation of words, Neural TTS has significantly reduced listening fatigue when you interact with AI systems. Our team demonstrated our neural-network-powered text-to-speech capability at the Microsoft Ignite conference in Orlando, Florida, this week. The capability is currently available in preview through Azure Cognitive Services Speech Services. Neural text-to-speech can be used to make interactions with chatbots and virtual assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems.
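For developers, the preview is reachable through the Azure Speech SDK. The sketch below assumes the azure-cognitiveservices-speech Python package and a valid Speech key; the neural voice name is only an example, since available voices vary by region and rollout.

```python
# Hedged sketch: synthesize speech with an Azure neural voice via the Speech SDK.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
# Example neural voice name; check the service's voice list for your region.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Neural text-to-speech sounds more natural.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized {} bytes of audio.".format(len(result.audio_data)))
```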


Cloud Text-to-Speech API - Speech Synthesis - Google Cloud

#artificialintelligence

Google Cloud Text-to-Speech enables developers to synthesize natural-sounding speech with 30 voices, available in multiple languages and variants. It applies DeepMind's groundbreaking research in WaveNet and Google's powerful neural networks to deliver high-fidelity audio. With this easy-to-use API, you can create lifelike interactions with your users, across many applications and devices.
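A minimal request with the google-cloud-texttospeech Python client might look like the sketch below; newer client releases use the flat namespace shown here, and the voice name is just one example of the available WaveNet voices (client.list_voices() returns the full set).

```python
# Hedged sketch: one synthesis request with the Cloud Text-to-Speech client library.
from google.cloud import texttospeech

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(text="Hello from Cloud Text-to-Speech.")
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",  # example WaveNet voice
)
audio_config = texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)
```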


Google Cloud's Text-to-Speech API moves to GA, adds new WaveNet voices

ZDNet

Google Cloud on Tuesday announced the general availability of its Cloud Text-to-Speech API, which lets developers add natural-sounding speech to their devices or applications. The API now also offers a feature to optimize the audio for the kind of speaker hardware it will be played through. Google has also added several new WaveNet voices to the API, opening up opportunities for natural-sounding speech in more languages and a wider variety of voices. Google first announced Text-to-Speech in March, illustrating how Google has been able to leverage technology from its acquisition of DeepMind. The AI company created WaveNet, a deep neural network for generating raw audio.


Twilio gets more than 50 new text-to-speech voices with Amazon Polly

ZDNet

Twilio is giving developers more control over their interactive voice applications with built-in support for Amazon Polly -- the AWS text-to-speech service that uses deep learning to synthesize speech. The integration adds more than 50 human-sounding voices in 25 languages to the Twilio platform, the cloud communications company announced Monday. In addition to offering access to different voices and languages, Polly will enable developers using Twilio's Programmable Voice to control variables like the volume, pitch, rate and pronunciation of the voices that interact with end users. Programmable Voice has long offered a built-in basic text-to-speech (TTS) service that supports three voices, each with its own supported set of languages. TTS capabilities, however, have improved dramatically in recent years, and Twilio notes that Amazon has been at the forefront of these improvements.
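In practice, choosing a Polly voice from Programmable Voice is a small change to the TwiML a call webhook returns. The sketch below uses the Twilio Python helper library; "Polly.Joanna" is one of the Polly voices Twilio exposes and is used here only as an example.

```python
# Hedged sketch: TwiML that speaks a message with an Amazon Polly voice on Twilio.
from twilio.twiml.voice_response import VoiceResponse

response = VoiceResponse()
# Example Polly voice; available names depend on the languages enabled for the account.
response.say("Thanks for calling. Your package is out for delivery.", voice="Polly.Joanna")
print(str(response))  # return this XML from the Programmable Voice webhook
```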


Artificial Intelligence Is Helping Blind People See In Philadelphia

#artificialintelligence

Money is one of many challenges for people who are visually impaired. The MyEye device's features include recognizing different kinds of products, which are then described to the wearer through an earpiece. "Oreos cookies, it will tell me it's Oreos cookies. This is how you recognize the product," said Pedro. Dr. Georgia Crozier with the Moore Eye Institute says MyEye is unlike other devices that work with magnification. It sees for the person and translates what it sees into words.


Speech Synthesis as a Service – Towards Data Science

#artificialintelligence

With the increasing performance of text-to-speech systems, the term "robotic voice" is likely to be redefined soon. One improvement at a time, we will come to think of speech synthesis as a complement and, occasionally, a competitor to human voice-over talents and announcers. The publications describing WaveNet [1], Tacotron [2], DeepVoice [3] and other systems are important milestones on the way to passing acoustic forms of the Turing test. Training a speech synthesizer, however, can still be a time-consuming, resource-intensive and, sometimes, outright frustrating task. The issues and demos published on GitHub repositories focused on replicating research results are a testament to this fact.