Attention-based Audio-Visual Fusion for Robust Automatic Speech Recognition

Automatic speech recognition can potentially benefit from lip motion patterns, which complement acoustic speech and improve overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes beyond simple feature concatenation and learns to automatically align the two modalities, leading to enhanced representations that increase recognition accuracy in both clean and noisy conditions. We test our strategy on the TCD-TIMIT and LRS2 datasets, designed for large-vocabulary continuous speech recognition, applying three types of noise at different power ratios. We also exploit state-of-the-art sequence-to-sequence architectures, showing that our method can be easily integrated. Results show relative improvements from 7% up to 30% on TCD-TIMIT over the acoustic modality alone, depending on the acoustic noise level. We anticipate that the fusion strategy can easily generalise to many other multimodal tasks involving correlated modalities.
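To make the general idea concrete, here is a minimal NumPy sketch of attention-based cross-modal alignment: each acoustic frame attends over the visual frames, and the attended visual context is concatenated with the audio features. This is only an illustration of the generic mechanism, not the paper's actual architecture; all names, dimensions, and the scaled dot-product scoring are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_and_fuse(audio, video):
    """Align video frames to audio frames with scaled dot-product attention,
    then concatenate the attended video context with the audio features."""
    d = audio.shape[-1]
    scores = audio @ video.T / np.sqrt(d)   # (T_audio, T_video) alignment scores
    weights = softmax(scores, axis=-1)      # each audio frame attends over video
    context = weights @ video               # (T_audio, d) visual summary per audio frame
    return np.concatenate([audio, context], axis=-1)  # (T_audio, 2d) fused features

rng = np.random.default_rng(0)
audio = rng.standard_normal((50, 64))  # e.g. 50 acoustic frames, 64-dim features
video = rng.standard_normal((20, 64))  # e.g. 20 lip-region frames, projected to 64-dim
fused = attend_and_fuse(audio, video)
print(fused.shape)  # (50, 128)
```

Because the visual stream is re-expressed per acoustic frame, the fused representation can be fed to a standard sequence-to-sequence recogniser in place of the audio features alone, which is what makes this kind of fusion easy to integrate.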

Scalable Factorized Hierarchical Variational Autoencoder Training

Deep generative models have achieved great success in unsupervised learning with the ability to capture complex nonlinear relationships between latent generating factors and observations. Among them, a factorized hierarchical variational autoencoder (FHVAE) is a variational inference-based model that formulates a hierarchical generative process for sequential data. Specifically, an FHVAE model can learn disentangled and interpretable representations, which have been proven useful for numerous speech applications, such as speaker verification, robust speech recognition, and voice conversion. However, as we will elaborate in this paper, the training algorithm proposed in the original paper is not scalable to datasets of thousands of hours, which makes this model less applicable on a larger scale. After identifying limitations in terms of runtime, memory, and hyperparameter optimization, we propose a hierarchical sampling training algorithm to address all three issues. Our proposed method is evaluated comprehensively on a wide variety of datasets, ranging from 3 to 1,000 hours and involving different types of generating factors, such as recording conditions and noise types. In addition, we also present a new visualization method for qualitatively evaluating the performance with respect to interpretability and disentanglement. Models trained with our proposed algorithm demonstrate the desired characteristics on all the datasets.
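The scalability argument hinges on two-level (hierarchical) sampling: instead of maintaining a sequence-level posterior cache for every sequence in the corpus, each training step draws a small set of sequences first and then draws segments only from that set, so the cache size is bounded by the batch configuration rather than the corpus size. The toy sketch below illustrates that sampling structure only; the corpus sizes, the 32-dim cache entries, and all names are hypothetical, and this is not the authors' training algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical corpus: many sequences, each split into fixed-length segments.
n_sequences = 10_000
segments_per_seq = 50

def hierarchical_batches(n_seq_per_batch, n_seg_per_seq, n_batches):
    """Two-level sampling: first draw a small set of sequences, then draw
    segments only from those sequences. A per-sequence cache (e.g. an
    estimate of a sequence-level latent) then needs only n_seq_per_batch
    entries instead of one entry per corpus sequence."""
    for _ in range(n_batches):
        seqs = rng.choice(n_sequences, size=n_seq_per_batch, replace=False)
        cache = {s: np.zeros(32) for s in seqs}  # bounded sequence-level cache
        segs = [(s, rng.integers(segments_per_seq))
                for s in seqs for _ in range(n_seg_per_seq)]
        yield seqs, segs, cache

for seqs, segs, cache in hierarchical_batches(256, 4, 1):
    print(len(cache), len(segs))  # 256 1024
```

With flat sampling the cache would hold 10,000 entries regardless of batch size; here it holds 256, which is the memory saving that makes training on thousands of hours feasible.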

US Playing Catch Up On Artificial Intelligence As Biggest Rival Seizes Top Spot


Artificial intelligence, or "AI," is the current direction of technological innovation, but if the U.S. does not step up its game, another world power may lead the future. The number of journal articles mentioning "deep learning" or "deep neural networks" produced by China exceeds the number published about similar technological advances by the U.S. The same is true for cited publications, a National Science and Technology Council (NSTC) report reveals. The report's findings indicate that China is not only doing more AI research but also having a greater influence on this growing field. Chinese venture capitalists and tech giants like Baidu, a company commonly referred to as China's Google, are investing heavily in AI. For Baidu, major projects include driverless vehicles, search engine optimization, and improved speech recognition software.

Microsoft has built a machine that's as good as humans at recognizing speech


One by one, the skills that separate us from machines are falling into the machines' column. First there was chess, then Jeopardy!, then Go, then object recognition, face recognition, and video gaming in general. You could be forgiven for thinking that humans are becoming obsolete. But try any voice recognition software and your faith in humanity will be quickly restored. Though good and getting better, these systems are by no means perfect.

Are Microsoft And VocalZoom The Peanut Butter And Chocolate Of Voice Recognition?


Moore's law has driven silicon chip circuitry to the point where we are surrounded by devices equipped with microprocessors. The devices are frequently wonderful; communicating with them – not so much. Pressing buttons on smart devices or keyboards is often clumsy and never the method of choice when effective voice communication is possible. The key word in the previous sentence is "effective". Technology has advanced to the point where we are in the early stages of being able to communicate with our devices using voice recognition.