Why Is Speech Recognition Technology So Difficult to Perfect?

Huffington Post - Tech news and opinion

This is an excellent question to start off an automatic speech recognition (ASR) interview. I would slightly rephrase it as "Why is speech recognition hard?" At its core, ASR is a machine learning (ML) problem: the objective is to classify a sound wave into one of the basic units of speech (a "class" in ML terminology), such as a word. The difficulty with human speech is the enormous variation in how a word is pronounced, even by the same speaker. For example, below are two recordings of the word "Yes" spoken by the same person (wave source: AN4 dataset [1]).
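
To make that variation concrete, here is a minimal Python sketch (not from the article) that extracts MFCC features from two such recordings and aligns them with dynamic time warping; even for the same word from the same speaker, the two feature sequences differ in length and detail. The file names are hypothetical stand-ins for AN4-style recordings of "Yes", and librosa is assumed to be available.

```python
# Minimal sketch: compare two recordings of the same word.
# "yes_1.wav" and "yes_2.wav" are hypothetical placeholders.
import librosa

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load a recording and return its MFCC matrix (n_mfcc x n_frames)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

a = mfcc_features("yes_1.wav")   # first recording of "Yes"
b = mfcc_features("yes_2.wav")   # second recording, same speaker

# The utterances differ in length and detail, so a frame-by-frame comparison
# is not meaningful; dynamic time warping finds the best alignment, and the
# residual cost quantifies how different the two pronunciations still are.
D, wp = librosa.sequence.dtw(X=a, Y=b, metric="euclidean")
print("frames:", a.shape[1], "vs", b.shape[1])
print("DTW alignment cost:", D[-1, -1])
```

A classifier has to absorb all of this variability (speaker, speaking rate, accent, channel, noise) while still mapping every variant to the same class, which is what makes the problem hard.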


Why Isn't Voice Recognition Software More Accurate?

Forbes - Tech


Speaker identification from the sound of the human breath

arXiv.org Machine Learning

This paper examines the speaker identification potential of breath sounds in continuous speech. Speech is largely produced during exhalation. In order to replenish air in the lungs, speakers must periodically inhale. When inhalation occurs in the midst of continuous speech, it is generally through the mouth. Intra-speech breathing behavior has been the subject of much study, including the patterns, cadence, and variations in energy levels. However, an often-ignored characteristic is the sound produced during the inhalation phase of this cycle. Intra-speech inhalation is rapid and energetic, performed with open mouth and glottis, effectively exposing the entire vocal tract to enable maximum intake of air. This results in vocal tract resonances evoked by turbulence that are characteristic of the speaker's speech-producing apparatus. Consequently, the sounds of inhalation are expected to carry information about the speaker's identity. Moreover, unlike other spoken sounds, which are subject to active control, inhalation sounds are generally more natural and less affected by voluntary influences. The goal of this paper is to demonstrate that breath sounds are indeed bio-signatures that can be used to identify speakers. We show that these sounds by themselves can yield remarkably accurate speaker recognition with appropriate feature representations and classification frameworks.
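
The abstract does not spell out the feature representations or classification frameworks used, so the following is only a rough Python sketch of how such a breath-based speaker identifier could be set up: mean and standard deviation of MFCCs as an embedding for each inhalation segment, with an SVM as the classifier. The file names and labels are hypothetical, and the sketch assumes the inhalation segments have already been excised from the speech stream.

```python
# Rough sketch of a "feature representation + classifier" pipeline for
# breath-based speaker ID; not the authors' actual system.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def breath_embedding(path, sr=16000, n_mfcc=20):
    """Summarize one inhalation segment as the mean and std of its MFCCs."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training data: (wav_path, speaker_id) pairs of breath segments.
train = [("spk1_breath_001.wav", "spk1"), ("spk2_breath_001.wav", "spk2")]
X = np.stack([breath_embedding(p) for p, _ in train])
y = [label for _, label in train]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([breath_embedding("unknown_breath.wav")]))
```

Summary statistics over MFCC frames are only one possible representation; the point is simply that inhalation segments can be treated as ordinary labeled audio examples and fed to a standard classifier.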


Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems

Neural Information Processing Systems

Neural networks have become ubiquitous in automatic speech recognition systems. While neural networks are typically used as acoustic models in more complex systems, recent studies have explored end-to-end speech recognition systems based on neural networks, which can be trained to directly predict text from input acoustic features. Although such systems are conceptually elegant and simpler than traditional systems, it is less obvious how to interpret the trained models. In this work, we analyze the speech representations learned by a deep end-to-end model that is based on convolutional and recurrent layers and trained with a connectionist temporal classification (CTC) loss. We use the pre-trained model to generate frame-level features, which are fed to a classifier trained to predict the phone label of each frame.
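
As a rough PyTorch illustration (not the authors' architecture), the sketch below pairs a small convolutional-plus-recurrent acoustic model with a CTC output head and shows how its frozen frame-level hidden states can be probed by a separate linear classifier over phone labels. The layer sizes, the 29-symbol output alphabet, and the 48-phone inventory are assumptions made for the sake of the example.

```python
# Minimal sketch of the analysis setup: a conv + recurrent CTC model whose
# frame-level hidden states are probed by a separate phone classifier.
import torch
import torch.nn as nn

class ConvRNNCTC(nn.Module):
    def __init__(self, n_feats=40, hidden=256, n_symbols=29):
        super().__init__()
        # 1-D convolution over time, treating feature bins as input channels.
        self.conv = nn.Conv1d(n_feats, hidden, kernel_size=5, padding=2)
        self.rnn = nn.GRU(hidden, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_symbols)  # CTC outputs (chars + blank)

    def forward(self, x):                  # x: (batch, time, n_feats)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.rnn(h)                 # frame-level hidden representations
        return self.out(h), h              # logits for CTC, features for probing

model = ConvRNNCTC()
ctc_loss = nn.CTCLoss(blank=0)             # used when training the ASR model itself

# Probe: a separate classifier mapping each frame's hidden state to one of,
# say, 48 phone classes; only the probe's parameters are trained here.
probe = nn.Linear(2 * 256, 48)

feats = torch.randn(8, 100, 40)            # dummy batch of acoustic features
with torch.no_grad():                      # keep the pre-trained ASR model frozen
    _, frame_repr = model(feats)
phone_logits = probe(frame_repr)           # (8, 100, 48) frame-level phone scores
```

Because only the probe is trained, its frame-level accuracy indicates how much phonetic information the end-to-end model's hidden states already encode.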


Reading a Neural Network's "Mind"

#artificialintelligence
