Speech Synthesis


Twilio gets more than 50 new text-to-speech voices with Amazon Polly

ZDNet

Twilio is giving developers more control over their interactive voice applications with built-in support for Amazon Polly -- the AWS text-to-speech service that uses deep learning to synthesize speech. The integration adds more than 50 human-sounding voices in 25 languages to the Twilio platform, the cloud communications company announced Monday. In addition to offering access to different voices and languages, Polly will enable developers using Twilio's Programmable Voice to control variables like the volume, pitch, rate and pronunciation of the voices that interact with end users. Programmable Voice has long offered a basic built-in text-to-speech (TTS) service that supports three voices, each with its own set of supported languages. TTS capabilities, however, have improved dramatically in recent years, and Twilio notes that Amazon has been at the forefront of these improvements.
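
In practice, picking a Polly voice and tuning its prosody happens in the TwiML <Say> verb. Here is a minimal sketch using Twilio's Python helper library, with an illustrative voice name and SSML prosody values (the library's SSML helper methods are assumed here):

    from twilio.twiml.voice_response import VoiceResponse

    response = VoiceResponse()
    # Name an Amazon Polly voice directly in the <Say> verb.
    say = response.say(voice="Polly.Joanna", language="en-US")
    # SSML <prosody> exposes the volume, pitch and rate controls for Polly voices.
    say.prosody("Thanks for calling. Your order has shipped.",
                volume="loud", rate="90%", pitch="-2%")
    print(response)  # the TwiML document Programmable Voice executes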


Artificial Intelligence Is Helping Blind People See In Philadelphia

#artificialintelligence

Handling money is one of many challenges for people who are visually impaired. The device's features include recognizing different kinds of products, the names of which are then spoken into an earpiece. "Oreos cookies -- it will tell me it's Oreos cookies. This is how you recognize the product," said Pedro. Dr. Georgia Crozier of the Moore Eye Institute says MyEye is unlike other devices, which work with magnification: it sees for the person and translates what it sees into words.


Speech Synthesis as a Service – Towards Data Science

#artificialintelligence

With the increasing performance of text-to-speech systems, the term "robotic voice" is likely to be redefined soon. One improvement at a time, we will come to think of speech synthesis as a complement and, occasionally, a competitor to human voice-over talent and announcers. The publications describing WaveNet [1], Tacotron [2], Deep Voice [3] and other systems are important milestones on the way to passing acoustic forms of the Turing test. Training a speech synthesizer, however, can still be a time-consuming, resource-intensive and sometimes outright frustrating task. The issues and demos published in GitHub repositories focused on replicating research results are a testament to this fact.


r/MachineLearning - [D] Best open source Text to Speech networks?

#artificialintelligence

Hey guys, I'm looking to build an application that uses neural text-to-speech from my Python program. I'm not sure what the open-source SOTA is like; I'd love to get some reference repositories to check out, especially if they have demos.


Speaker-independent raw waveform model for glottal excitation

arXiv.org Machine Learning

Recent speech technology research has seen a growing interest in using WaveNets as statistical vocoders, i.e., generating speech waveforms from acoustic features. These models have been shown to improve the generated speech quality over classical vocoders in many tasks, such as text-to-speech synthesis and voice conversion. Furthermore, conditioning WaveNets with acoustic features allows sharing the waveform generator model across multiple speakers without additional speaker codes. However, multi-speaker WaveNet models require large amounts of training data and computation to cover the entire acoustic space. This paper proposes leveraging the source-filter model of speech production to more effectively train a speaker-independent waveform generator with limited resources. We present a multi-speaker 'GlotNet' vocoder, which utilizes a WaveNet to generate glottal excitation waveforms that are then used to excite the corresponding vocal tract filter to produce speech. Listening tests show that the proposed model compares favourably with a direct WaveNet vocoder trained with the same model architecture and data.
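
The source-filter decomposition the paper leverages is easy to state in code: a generated excitation signal is pushed through an all-pole vocal tract filter. Here is a minimal sketch, assuming NumPy/SciPy, with noise standing in for the WaveNet-generated glottal excitation and a toy static filter in place of the per-frame filters derived from acoustic features:

    import numpy as np
    from scipy.signal import lfilter

    fs = 16000
    excitation = np.random.randn(fs)        # stand-in for GlotNet's glottal excitation

    # Toy all-pole vocal tract filter; in GlotNet the coefficients come from
    # acoustic features and change frame by frame rather than staying fixed.
    a = np.array([1.0, -1.3, 0.49])
    speech = lfilter([1.0], a, excitation)  # filter the excitation into speech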


Google's new text-to-speech service has more realistic voices

Engadget

Google will now let developers use the text-to-speech synthesis that powers the voices in Google Assistant and Maps. Cloud Text-to-Speech is available now through the Google Cloud Platform, and the company says it can be used to power voice response systems in call centers, enable IoT device speech and convert media like news articles and books into a spoken format. There are 32 different voice options in 12 languages, and users can customize pitch, speaking rate and volume gain. Additionally, some of the available voices were built with Google's WaveNet model, which was developed by the company's DeepMind team and first announced in 2016.
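
Here is a minimal sketch of calling the service with the google-cloud-texttospeech Python client; the voice name and audio settings below are illustrative:

    from google.cloud import texttospeech

    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text="Hello from Cloud Text-to-Speech."),
        voice=texttospeech.VoiceSelectionParams(
            language_code="en-US",
            name="en-US-Wavenet-D",  # one of the WaveNet-built voices
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3,
            speaking_rate=1.0,       # the pitch, rate and volume gain
            pitch=0.0,               # knobs the article mentions
            volume_gain_db=0.0,
        ),
    )
    with open("output.mp3", "wb") as f:
        f.write(response.audio_content)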


Google Cloud Platform launches text-to-speech service to compete with AWS Polly

ZDNet

Google Cloud outlined Cloud Text-to-Speech, a machine learning service that uses a model from Google's DeepMind subsidiary to generate raw audio. With the move, developers get broader access to the natural-sounding text-to-speech technology used in Google Assistant, Search, Maps and other products. According to Google, Cloud Text-to-Speech can be used to power call center voice response systems, enable Internet of Things devices to talk and convert text-based media into spoken formats. Google Cloud Text-to-Speech allows customers to choose from 32 different voices in 12 languages.


Meet the man whose voice became Stephen Hawking's

Mashable

Stephen Hawking's computer-generated voice is so iconic that it's trademarked -- the filmmakers behind The Theory of Everything had to get Hawking's personal permission to use the voice in the biopic. But that voice has an interesting origin story of its own. Back in the '80s, when Hawking was first exploring text-to-speech communication options after he lost the power of speech, a pioneer in computer-generated speech algorithms was working at MIT on that very thing. His name was Dennis Klatt. As Wired uncovered, Klatt's work was incorporated into one of the first devices that translated text into speech: the DECtalk.


Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning

arXiv.org Artificial Intelligence

We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training ten times faster. We scale Deep Voice 3 to data set sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on one single-GPU server.
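
The two ingredients named in the abstract -- causal convolutions and attention -- can be sketched compactly. Here is a minimal, illustrative PyTorch sketch (not the paper's exact implementation) of a gated causal convolution block and the scaled dot-product attention that aligns decoder steps with the encoded text:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConvGLU(nn.Module):
        """Gated causal 1-D convolution block with a scaled residual."""
        def __init__(self, channels, kernel_size=5):
            super().__init__()
            self.pad = kernel_size - 1            # left-pad only: no future leakage
            self.conv = nn.Conv1d(channels, 2 * channels, kernel_size)

        def forward(self, x):                     # x: (batch, channels, time)
            y = self.conv(F.pad(x, (self.pad, 0)))
            a, b = y.chunk(2, dim=1)              # gated linear unit
            return (a * torch.sigmoid(b) + x) * (0.5 ** 0.5)

    def attend(query, keys, values):
        """Scaled dot-product attention over the encoder's text representation.
        query: (batch, T_dec, d); keys, values: (batch, T_enc, d)."""
        scores = torch.bmm(query, keys.transpose(1, 2)) / keys.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights, values), weights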


Google Creates a Text-to-Speech AI System That Sounds Like a Human Voice

#artificialintelligence

Google has taken another big step toward its 'AI-first' dream. The tech giant has developed a text-to-speech system with remarkably human-like articulation. The AI system, called "Tacotron 2", can produce AI-generated computer speech in a human-sounding voice. Google researchers mention in the blog post that the new approach does not utilise complex linguistic and acoustic features as input. Instead, it produces human-like speech from text using neural networks trained only on speech examples and the corresponding text transcripts.
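
Training only on speech examples and their transcripts means the network learns to map character sequences directly to acoustic targets such as mel spectrograms. Here is a minimal sketch, assuming librosa and an illustrative file name and parameters, of preparing one such training pair:

    import librosa
    import numpy as np

    transcript = "the birch canoe slid on the smooth planks"
    audio, sr = librosa.load("sample.wav", sr=22050)

    # An 80-band log-mel spectrogram: the acoustic target the network
    # learns to predict directly from the character sequence.
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=1024,
                                         hop_length=256, n_mels=80)
    log_mel = np.log(np.clip(mel, 1e-5, None))

    training_pair = (transcript, log_mel)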