Modality Dropout for Multimodal Device Directed Speech Detection using Verbal and Non-Verbal Features
Krishna, Gautam, Dharur, Sameer, Rudovic, Oggi, Dighe, Pranay, Adya, Saurabh, Abdelaziz, Ahmed Hussen, Tewfik, Ahmed H
Device-directed speech detection (DDSD) is the binary classification task of distinguishing queries directed at a voice assistant from side conversation or background speech. State-of-the-art DDSD systems use verbal cues, e.g., acoustic, text, and/or automatic speech recognition (ASR) features, to classify speech as device-directed or otherwise, and often have to contend with one or more of these modalities being unavailable when deployed in real-world settings. In this paper, we investigate fusion schemes for DDSD systems that can be made more robust to missing modalities. Concurrently, we study the use of non-verbal cues, specifically prosody features, in addition to verbal cues for DDSD. We present different approaches to combining scores and embeddings from prosody with the corresponding verbal cues, finding that prosody improves DDSD performance by up to 8.5% in terms of false acceptance rate (FA) at a given fixed operating point via non-linear intermediate fusion, while our use of modality dropout techniques improves the performance of these models by 7.4% in terms of FA when evaluated with missing modalities at inference time.
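The modality dropout idea described above can be sketched in isolation: during training, entire modality embeddings are randomly zeroed so the fused model learns to cope with missing inputs at inference. This is a minimal illustrative sketch (the function, the dictionary layout, and the guard that keeps at least one modality are assumptions, not the paper's implementation):

```python
import random

def modality_dropout(embeddings, p_drop=0.3, training=True):
    """Zero out randomly chosen modality embeddings during training.

    embeddings: dict of modality name -> feature vector (list of floats).
    At least one modality is always kept so the fused input is never empty.
    """
    if not training:
        return embeddings  # no dropout at inference time
    names = list(embeddings)
    kept = [n for n in names if random.random() >= p_drop]
    if not kept:  # guard: never drop every modality at once
        kept = [random.choice(names)]
    return {n: (v if n in kept else [0.0] * len(v))
            for n, v in embeddings.items()}

# Toy verbal + non-verbal embeddings for one utterance:
feats = {"acoustic": [0.2, 0.4], "text": [1.0, -1.0], "prosody": [0.5, 0.1]}
out = modality_dropout(feats, p_drop=0.5)
```

Because the downstream fusion network sees zeroed slots during training, it no longer relies on any single modality being present, which is what makes the deployed model tolerant to a missing ASR or text stream.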
Spoken Speech Enhancement using EEG
Krishna, Gautam, Han, Yan, Tran, Co, Carnahan, Mason, Tewfik, Ahmed H
Brain Machine Interface Lab, The University of Texas at Austin. In this paper we demonstrate spoken speech enhancement using electroencephalography (EEG) signals with a generative adversarial network (GAN) based model and a long short-term memory (LSTM) regression based model. Our results demonstrate that EEG features can be used to clean speech recorded in the presence of background noise. Index Terms: electroencephalography (EEG), speech enhancement, deep learning. Speech enhancement is the process of improving the quality of speech degraded by additive noise; it is a critical preprocessing step for automatic speech recognition (ASR) systems operating in the presence of background noise, where noisy speech is first fed into a speech enhancement system and the resulting enhanced speech is then fed into the ASR model.
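The regression flavour of this pipeline can be sketched with a per-frame linear map trained by SGD from concatenated [noisy speech ‖ EEG] features to clean speech features. This is a toy stand-in for the paper's LSTM regression model, under the assumption that the EEG features carry information about the corrupting noise; all data shapes and values below are illustrative:

```python
def train_enhancer(noisy, eeg, clean, lr=0.1, epochs=200):
    """Fit a linear map from concatenated [noisy || EEG] frame features
    to clean speech features via per-sample SGD (toy stand-in for an
    LSTM regression enhancement model)."""
    dim_in = len(noisy[0]) + len(eeg[0])
    dim_out = len(clean[0])
    w = [[0.0] * dim_in for _ in range(dim_out)]
    for _ in range(epochs):
        for x_n, x_e, y in zip(noisy, eeg, clean):
            x = x_n + x_e  # concatenate speech and EEG features
            for o in range(dim_out):
                pred = sum(wi * xi for wi, xi in zip(w[o], x))
                err = pred - y[o]
                for i in range(dim_in):
                    w[o][i] -= lr * err * x[i]
    return w

def enhance(w, x_n, x_e):
    """Apply the learned map to one noisy frame plus its EEG features."""
    x = x_n + x_e
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

# Toy frames: noisy = clean + noise, and the EEG feature encodes the noise.
noisy = [[0.8], [0.1], [0.6]]
eeg   = [[0.3], [0.4], [-0.2]]
clean = [[0.5], [-0.3], [0.8]]
w = train_enhancer(noisy, eeg, clean)
```

On this toy data the learned weights approach [1, -1], i.e. the model learns to subtract the EEG-encoded noise from the noisy frame, which is the intuition behind using EEG as a side channel for enhancement.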
Advancing Speech Recognition With No Speech Or With Noisy Speech
Krishna, Gautam, Tran, Co, Carnahan, Mason, Tewfik, Ahmed H
In this paper we demonstrate end-to-end continuous speech recognition (CSR) using electroencephalography (EEG) signals with no speech signal as input. Attention-based and connectionist temporal classification (CTC) based automatic speech recognition (ASR) systems were implemented to perform recognition. We further demonstrate CSR for noisy speech by fusing speech features with EEG features.
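The CTC decoding rule used by such systems can be illustrated independently of any model: a frame-level label path is collapsed by merging adjacent repeats and then dropping the blank symbol. A minimal greedy-decoding sketch (the label IDs are arbitrary):

```python
def ctc_collapse(path, blank=0):
    """Collapse a frame-level CTC label path: merge adjacent repeated
    labels, then remove the blank symbol."""
    out, prev = [], None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# A blank between two identical labels preserves both copies:
ctc_collapse([0, 3, 3, 0, 3, 5, 5, 0])  # -> [3, 3, 5]
```

This collapse rule is what lets a CTC network emit one label per EEG frame while still producing a shorter output transcription.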
Robust End to End Speaker Verification Using EEG
Han, Yan, Krishna, Gautam, Tran, Co, Carnahan, Mason, Tewfik, Ahmed H
In this paper we demonstrate that the performance of a speaker verification system can be improved by concatenating electroencephalography (EEG) signal features with speech signal features. We use a state-of-the-art end-to-end deep learning model to perform speaker verification, and we demonstrate our results for noisy speech. Our results indicate that EEG signals can improve the robustness of speaker verification systems.
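The concatenate-then-score idea can be sketched as follows; the frame-wise fusion and the cosine-similarity verification score are generic stand-ins (not the paper's actual end-to-end network), and the embeddings and threshold are illustrative:

```python
import math

def fuse_frames(speech_frames, eeg_frames):
    """Frame-wise concatenation of speech and EEG feature vectors."""
    return [s + e for s, e in zip(speech_frames, eeg_frames)]

def cosine_score(a, b):
    """Verification score between enrollment and test embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

fused = fuse_frames([[1.0, 0.5]], [[0.2, -0.3]])  # one 4-dim fused frame

# Toy speaker embeddings; the accept threshold would be tuned on dev data.
enroll = [0.6, 0.8, 0.1]
test_same = [0.6, 0.79, 0.12]
accept = cosine_score(enroll, test_same) > 0.9
```

The fused frames would feed the embedding network; at test time, accepting or rejecting a trial reduces to thresholding the score between the enrollment and test embeddings.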
Speech Recognition With No Speech Or With Noisy Speech Beyond English
Krishna, Gautam, Tran, Co, Han, Yan, Carnahan, Mason, Tewfik, Ahmed H
In this paper we demonstrate continuous noisy speech recognition using a connectionist temporal classification (CTC) model on a limited Chinese vocabulary, with electroencephalography (EEG) features and no speech signal as input. We further demonstrate continuous noisy speech recognition with a single CTC model on a limited joint English and Chinese vocabulary, again using EEG features with no speech signal as input.
Speech Recognition with no speech or with noisy speech
Krishna, Gautam, Tran, Co, Yu, Jianguo, Tewfik, Ahmed H
The performance of automatic speech recognition (ASR) systems degrades in the presence of noisy speech. This paper demonstrates that using electroencephalography (EEG) can help automatic speech recognition systems overcome performance loss in the presence of noise. The paper also shows that distillation training of automatic speech recognition systems using EEG features increases their performance. Finally, we demonstrate the ability to recognize words from EEG alone, with no speech signal, on a limited English vocabulary with high accuracy.
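Distillation training in this sense can be sketched as minimizing the cross-entropy between a teacher's and the student's temperature-softened output distributions. This is a generic Hinton-style formulation, not necessarily the paper's exact loss; the temperature and logits below are illustrative:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions,
    scaled by T^2 as is conventional in knowledge distillation."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return (-sum(t * math.log(s) for t, s in zip(p_teacher, p_student))
            * temperature ** 2)

teacher = [2.0, 0.5, -1.0]
matched = distillation_loss(teacher, teacher)          # student agrees
mismatched = distillation_loss([-1.0, 0.5, 2.0], teacher)
```

The loss is minimized when the student's softened distribution matches the teacher's, so a speech-trained teacher can guide a student that also consumes EEG features.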