Speech Synthesis


mozilla/TTS

#artificialintelligence

This project is a part of Mozilla Common Voice. TTS aims to build a Text2Speech engine that is low in cost and high in quality. To begin with, you can hear a sample here. The model is strongly inspired by Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model; however, it includes many important updates over the baseline model that make training faster and computationally more efficient. Feel free to experiment with new ideas and propose changes.
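For orientation, here is a schematic sketch of the Tacotron-style text-to-spectrogram pipeline such an engine implements: a character encoder, an attention-based autoregressive decoder that emits mel-spectrogram frames, and (not shown) a vocoder that renders the frames as audio. The module and parameter names are illustrative, not the repository's actual API.

```python
# Schematic sketch of a Tacotron-style pipeline; names are illustrative,
# not mozilla/TTS's actual API.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    def __init__(self, n_chars=128, emb_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(n_chars, emb_dim)
        self.rnn = nn.GRU(emb_dim, emb_dim // 2, batch_first=True,
                          bidirectional=True)

    def forward(self, char_ids):
        # char_ids: (batch, text_len) -> encoder states (batch, text_len, emb_dim)
        out, _ = self.rnn(self.embedding(char_ids))
        return out

class MelDecoder(nn.Module):
    """Autoregressive decoder: one attention-weighted context per frame."""
    def __init__(self, emb_dim=256, n_mels=80):
        super().__init__()
        self.attn = nn.MultiheadAttention(emb_dim, num_heads=1, batch_first=True)
        self.rnn = nn.GRUCell(emb_dim + n_mels, emb_dim)
        self.proj = nn.Linear(emb_dim, n_mels)
        self.n_mels = n_mels

    def forward(self, enc, n_frames):
        batch = enc.size(0)
        h = enc.new_zeros(batch, enc.size(2))
        frame = enc.new_zeros(batch, self.n_mels)
        frames = []
        for _ in range(n_frames):
            ctx, _ = self.attn(h.unsqueeze(1), enc, enc)  # attend over text
            h = self.rnn(torch.cat([ctx.squeeze(1), frame], dim=-1), h)
            frame = self.proj(h)
            frames.append(frame)
        return torch.stack(frames, dim=1)  # (batch, n_frames, n_mels)

chars = torch.randint(0, 128, (1, 20))  # a dummy 20-character input
mel = MelDecoder()(TextEncoder()(chars), n_frames=50)
print(mel.shape)  # torch.Size([1, 50, 80]); a vocoder would turn this into audio
```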


Microsoft's new neural text-to-speech service helps machines speak like people

#artificialintelligence

Microsoft has reached a milestone in text-to-speech synthesis with a production system that uses deep neural networks to make the voices of computers nearly indistinguishable from recordings of people. With human-like natural prosody and clear articulation of words, neural TTS significantly reduces listening fatigue when you interact with AI systems. Our team demonstrated our neural-network-powered text-to-speech capability at the Microsoft Ignite conference in Orlando, Florida, this week. The capability is currently available in preview through Azure Cognitive Services Speech Services. Neural text-to-speech can be used to make interactions with chatbots and virtual assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems.
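For context, this is roughly how the service is reached from Python through the Azure Speech SDK; the subscription key, region, and voice name below are placeholders rather than values from the article.

```python
# Minimal sketch of calling a neural voice through the Azure Speech SDK
# (pip install azure-cognitiveservices-speech). Key, region, and voice
# name are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="eastus")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # a neural voice

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from neural text-to-speech.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print(f"Synthesized {len(result.audio_data)} bytes of audio.")
```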


Cloud Text-to-Speech - Speech Synthesis Cloud Text-to-Speech API Google Cloud

#artificialintelligence

Google Cloud Text-to-Speech enables developers to synthesize natural-sounding speech with 30 voices, available in multiple languages and variants. It applies DeepMind's groundbreaking research in WaveNet and Google's powerful neural networks to deliver high fidelity audio. With this easy-to-use API, you can create lifelike interactions with your users, across many applications and devices.
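A minimal request sketch using the google-cloud-texttospeech client library, assuming application default credentials are configured; the voice selection and text are illustrative.

```python
# Minimal synthesis request with the google-cloud-texttospeech client
# library (pip install google-cloud-texttospeech).
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hello, world!"),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
    ),
)

with open("output.mp3", "wb") as f:
    f.write(response.audio_content)  # raw MP3 bytes
```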


Google Cloud's Text-to-Speech API moves to GA, adds new WaveNet voices

ZDNet

Google Cloud on Tuesday announced the general availability of its Cloud Text-to-Speech API, which lets developers add natural-sounding speech to their devices or applications. The API also now offers a feature to optimize speech for the specific kind of speaker hardware it will play through. Google has also added several new WaveNet voices to the API, opening up opportunities for natural-sounding speech in more languages and a wider variety of voices. Google first announced Cloud Text-to-Speech in March, illustrating how Google has been able to leverage technology from its acquisition of DeepMind. The AI company created WaveNet, a deep neural network for generating raw audio.
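The speaker-optimization feature corresponds to the API's audio device profiles. A hedged fragment, reusing the client library from the sketch above; the profile id shown is one of the documented examples, optimizing playback for phone-line audio.

```python
# Swapping this AudioConfig into the earlier request applies a device
# profile that shapes the audio for telephony playback.
from google.cloud import texttospeech

audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.LINEAR16,
    effects_profile_id=["telephony-class-application"],
)
```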


Twilio gets more than 50 new text-to-speech voices with Amazon Polly

ZDNet

Twilio is giving developers more control over their interactive voice applications with built-in support for Amazon Polly -- the AWS text-to-speech service that uses deep learning to synthesize speech. The integration adds more than 50 human-sounding voices in 25 languages to the Twilio platform, the cloud communications company announced Monday. In addition to offering access to different voices and languages, Polly will enable developers using Twilio's Programmable Voice to control variables like the volume, pitch, rate and pronunciation of the voices that interact with end users. Programmable Voice has long offered a built-in basic text-to-speech (TTS) service that supports three voices, each with its own supported set of languages. TTS capabilities, however, have improved dramatically in recent years, and Twilio notes that Amazon has been at the forefront of these improvements.
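A small sketch of what that looks like through Twilio's Python helper library: the Say verb takes a Polly voice name, and finer prosody control (volume, pitch, rate) goes through SSML inside the verb. The voice chosen here is one of Polly's documented English voices; the greeting text is made up.

```python
# TwiML response selecting an Amazon Polly voice via Twilio's Python
# helper library (pip install twilio).
from twilio.twiml.voice_response import VoiceResponse

response = VoiceResponse()
response.say("Thanks for calling. How can we help you today?",
             voice="Polly.Joanna", language="en-US")
print(response)  # TwiML XML that Twilio Programmable Voice would play
```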


Artificial Intelligence Is Helping Blind People See In Philadelphia

#artificialintelligence

Money is one of many challenges for people who are visually impaired. The MyEye device's features include recognizing different kinds of products, which are then named aloud through an earpiece. "Oreos cookies: it will tell me it's Oreos cookies. This is how you recognize the product," said Pedro. Dr. Georgia Crozier of the Moore Eye Institute says MyEye is unlike other devices that work by magnification. It sees for the person and translates what it sees into words.


Speech Synthesis as a Service – Towards Data Science

#artificialintelligence

With the increasing performance of text-to-speech systems, the term "robotic voice" is likely to be redefined soon. One improvement at a time, we will come to think of speech synthesis as a complement to, and occasionally a competitor of, human voice-over talents and announcers. The publications describing WaveNet [1], Tacotron [2], Deep Voice [3] and other systems are important milestones on the way to passing acoustic forms of the Turing test. Training a speech synthesizer, however, can still be a time-consuming, resource-intensive and sometimes outright frustrating task. The issues and demos published in GitHub repositories focused on replicating research results are a testimony to that fact.


r/MachineLearning - [D] Best open source Text to Speech networks?

#artificialintelligence

Hey guys, I'm looking to make an application that uses neural text-to-speech for my Python program. I'm not sure what the open-source SOTA is like; would love to get some reference repositories to check out, especially if they have demos.


Speaker-independent raw waveform model for glottal excitation

arXiv.org Machine Learning

Recent speech technology research has seen a growing interest in using WaveNets as statistical vocoders, i.e., generating speech waveforms from acoustic features. These models have been shown to improve the generated speech quality over classical vocoders in many tasks, such as text-to-speech synthesis and voice conversion. Furthermore, conditioning WaveNets with acoustic features allows sharing the waveform generator model across multiple speakers without additional speaker codes. However, multi-speaker WaveNet models require large amounts of training data and computation to cover the entire acoustic space. This paper proposes leveraging the source-filter model of speech production to more effectively train a speaker-independent waveform generator with limited resources. We present a multi-speaker 'GlotNet' vocoder, which uses a WaveNet to generate glottal excitation waveforms that then excite the corresponding vocal tract filter to produce speech. Listening tests show that the proposed model compares favourably to a direct WaveNet vocoder trained with the same model architecture and data.
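To make the source-filter idea concrete, here is an illustrative sketch (not the paper's code): a crude glottal-like pulse train excites an all-pole vocal tract filter to produce a vowel-like waveform. In GlotNet, a WaveNet generates the excitation instead of this pulse train, and the filter is derived from the acoustic features.

```python
# Source-filter toy example: impulse-train excitation through an all-pole
# vocal tract filter. Formant values are illustrative.
import numpy as np
from scipy.signal import lfilter

fs = 16000                      # sample rate (Hz)
f0 = 120                        # fundamental frequency of the excitation (Hz)

# Source: impulse train at the glottal pulse rate (a crude stand-in for
# the excitation waveform a WaveNet would model).
excitation = np.zeros(fs)       # one second of audio
excitation[::fs // f0] = 1.0

# Filter: all-pole vocal tract with resonances near /a/-like formants.
formants = [700, 1220, 2600]    # formant centre frequencies (Hz)
bandwidth = 100                 # formant bandwidth (Hz)
a = np.array([1.0])
for fc in formants:
    r = np.exp(-np.pi * bandwidth / fs)
    pole = np.array([1.0, -2 * r * np.cos(2 * np.pi * fc / fs), r * r])
    a = np.convolve(a, pole)    # cascade second-order resonator sections

speech = lfilter([1.0], a, excitation)  # excite the vocal tract filter
```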


Google's new text-to-speech service has more realistic voices

Engadget

Google will now let developers use the text-to-speech synthesis that powers the voices in Google Assistant and Maps. Cloud Text-to-Speech is available now through the Google Cloud Platform and the company says it can be used to power voice response systems in call centers, enable IoT device speech and convert media like news articles and books into a spoken format. There are 32 different voice options in 12 languages and users can customize pitch, speaking rate and volume gain. Additionally, a selection of the available voices was built with Google's WaveNet model. WaveNet was developed by Google's DeepMind team, and the company first announced it in 2016.
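Those customization knobs map directly onto the client library's request objects; a hedged sketch, with a WaveNet voice name that follows the documented naming scheme but is an illustrative choice.

```python
# Pitch, speaking rate, and volume gain as exposed by the
# google-cloud-texttospeech client library.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Turn left in one hundred meters."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-D",  # a WaveNet-based voice
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.LINEAR16,
        pitch=-2.0,            # semitones, roughly -20.0..20.0
        speaking_rate=0.9,     # 1.0 is normal speed
        volume_gain_db=3.0,    # relative volume gain in dB
    ),
)
```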