Visualization and Interpretation of Latent Spaces for Controlling Expressive Speech Synthesis through Audio Analysis

arXiv.org Artificial Intelligence

The field of Text-to-Speech has seen major improvements in recent years thanks to deep learning techniques, and producing realistic speech is now possible. As a consequence, research on controlling expressiveness, i.e. generating speech in different styles or manners, has attracted increasing attention. Systems able to control style have been developed and show impressive results, but their control parameters often consist of latent variables that remain difficult to interpret. In this paper, we analyze and compare different latent spaces and obtain an interpretation of their influence on expressive speech. This opens the way to controllable speech synthesis systems with an understandable behaviour.
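The kind of latent-space analysis the abstract describes is often done by projecting learned style embeddings to two dimensions and inspecting the dominant directions. The following is a minimal illustrative sketch, not the paper's method: the latent vectors here are random stand-ins, and the PCA-via-SVD projection is just one common choice for such visualizations.

```python
import numpy as np

# Hypothetical illustration: project style embeddings (e.g. latents from
# an expressive TTS model) to 2-D so the dominant directions of variation
# can be inspected against prosodic attributes such as pitch or energy.
rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 16))  # stand-in for learned style vectors

# PCA via SVD of the centered data matrix.
centered = latents - latents.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # 2-D coordinates, one point per utterance

# Fraction of variance captured by the first two components.
explained = float((s[:2] ** 2).sum() / (s ** 2).sum())
print(coords.shape, explained)
```

Plotting `coords` colored by an acoustic feature is the usual next step for giving the latent axes an interpretation.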


6 Ways Speech Synthesis Is Being Powered By Deep Learning

#artificialintelligence

This model was open sourced back in June 2019 as an implementation of the paper Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis. This service is offered by Resemble.ai. With this product, one can clone any voice and create dynamic, iterable, and unique voice content. Users input a short voice sample, and the model, trained only during playback time, can immediately deliver text-to-speech utterances in the style of the sampled voice. Bengaluru's Deepsync offers an Augmented Intelligence that learns the way you speak.


Transition of Siri's Voice From Robotic to Human: Note the Difference - DZone AI

#artificialintelligence

Being an iOS user, how many times do you talk to Siri in a day? If you are a keen observer, then you know that Siri's voice sounds much more like a human in iOS 11 than it did before. This is because Apple is digging deeper into artificial intelligence, machine learning, and deep learning to offer the best personal assistant experience to its users. From the introduction of Siri with the iPhone 4S to its continuation in iOS 11, this personal assistant has evolved to get closer to humans and establish good relations with them. To reply to users' voice commands, Siri uses speech synthesis combined with deep learning.


Speech Synthesis Research Engineer ObEN, Inc.

#artificialintelligence

STAGE 1: Phone interview.
STAGE 2: In-person interview at Idealab (we cover travel expenses for the day).
STAGE 3: We require a sample project submission and a candidate proposal submission (to learn more about what an ObEN candidate proposal is, click here).
STAGE 4: Spend a day at our office and participate in all team activities.


Waveform generation for text-to-speech synthesis using pitch-synchronous multi-scale generative adversarial networks

arXiv.org Machine Learning

The state-of-the-art in text-to-speech synthesis has recently improved considerably due to novel neural waveform generation methods, such as WaveNet. However, these methods suffer from their slow sequential inference process, while their parallel versions are difficult to train and even more expensive computationally. Meanwhile, generative adversarial networks (GANs) have achieved impressive results in image generation and are making their way into audio applications; parallel inference is among their lucrative properties. By adopting recent advances in GAN training techniques, this investigation studies waveform generation for TTS in two domains (speech signal and glottal excitation). Listening test results show that while direct waveform generation with GAN is still far behind WaveNet, a GAN-based glottal excitation model can achieve quality and voice similarity on par with a WaveNet vocoder.
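The adversarial setup the abstract refers to can be summarized as a generator mapping noise to waveform segments and a discriminator scoring real versus generated segments. The sketch below is a deliberately tiny stand-in, not the paper's architecture: the segment length, single-layer networks, and logistic discriminator are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of a GAN for waveform segments (illustrative only).
rng = np.random.default_rng(0)
NOISE_DIM = 64
SEG = 256  # samples per generated segment (assumed, not from the paper)

def generator(z, w):
    # Single linear layer with tanh to keep samples in [-1, 1].
    return np.tanh(z @ w)

def discriminator(x, v):
    # Logistic score: estimated probability the segment is "real".
    return 1.0 / (1.0 + np.exp(-(x @ v)))

w = rng.normal(scale=0.1, size=(NOISE_DIM, SEG))  # generator weights
v = rng.normal(scale=0.1, size=SEG)               # discriminator weights

z = rng.normal(size=(8, NOISE_DIM))  # batch of noise vectors
fake = generator(z, w)               # (8, SEG) waveform segments
scores = discriminator(fake, v)      # per-segment realness scores
print(fake.shape, scores.shape)
```

In training, the two sets of weights would be updated adversarially; generating a glottal excitation signal instead of the raw waveform, as the abstract describes, keeps the same setup but delegates the spectral envelope to a vocoder.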