Jointly Learning Visual and Auditory Speech Representations from Raw Data

Haliassos, Alexandros, Ma, Pingchuan, Mira, Rodrigo, Petridis, Stavros, Pantic, Maja

arXiv.org Artificial Intelligence 

We present RAVEn, a self-supervised multi-modal approach to jointly learn visual and auditory speech representations. Our pre-training objective involves encoding masked inputs and then predicting contextualised targets generated by slowly-evolving momentum encoders. Driven by the inherent differences between video and audio, our design is asymmetric w.r.t. the two modalities' pretext tasks: whereas the auditory stream predicts both the visual and auditory targets, the visual one predicts only the auditory targets. We observe strong results in low- and high-resource labelled data settings when fine-tuning the visual and auditory encoders resulting from a single pre-training stage, in which the encoders are jointly trained. Notably, RAVEn surpasses all self-supervised methods on visual speech recognition (VSR) on LRS3, and combining RAVEn with self-training using only 30 hours of labelled data even outperforms a recent semi-supervised method trained on 90,000 hours of non-public data. At the same time, we achieve state-of-the-art results in the LRS3 low-resource setting for auditory speech recognition (as well as for VSR). Our findings point to the viability of learning powerful speech representations entirely from raw video and audio, i.e., without relying on handcrafted features.

The sound of someone articulating words coincides with the sight of movements in and around their mouth. Both a recording of a speech waveform and a corresponding silent video of mouth motion provide rich, but not identical, information on which words were uttered. Despite the difficulty of interpreting lip movements compared with an audio waveform, the task of visual speech recognition (VSR; also known as lipreading) has important applications, ranging from recognising utterances in a noisy environment (Ma et al., 2021b; Afouras et al., 2018a; Martinez et al., 2020; Makino et al., 2019) and aiding people suffering from aphonia (an inability to speak), to transcribing archival silent films and detecting DeepFake videos (Haliassos et al., 2021).

Auditory (also known as automatic) speech recognition (ASR) and VSR benefit greatly from the combination of high-capacity neural networks and large datasets. Rapid advances in modern hardware are enabling the use of ever-growing, data-hungry networks, but the effort required for transcription hinders the scaling of labelled data along with the models. One way to leverage unlabelled videos for VSR is to use an external ASR model for pseudo-labelling (Afouras et al., 2020; Ma et al., 2022). However, this requires a large amount of labelled data to train a strong ASR model in the first place, and supervised VSR training with long sequences often poses optimisation problems, requiring costly curriculum learning strategies (Chung et al., 2017; Ma et al., 2022) or pre-training the feature extractor with isolated words (Afouras et al., 2018a; Ma et al., 2021b).
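To make the asymmetric masked-prediction objective outlined in the abstract concrete, below is a minimal PyTorch sketch of one pre-training step. It is not the authors' implementation: the encoder, predictor heads, feature dimensions, masking scheme, and the assumption that audio and video share the same temporal resolution are all simplifications introduced here for illustration. What it does show is the structure of the objective: student encoders see masked inputs, momentum (EMA) teacher encoders see unmasked inputs and provide contextualised targets, the audio student predicts both audio and video targets, and the video student predicts only audio targets.

```python
# Sketch of the asymmetric masked-prediction objective (hypothetical modules/sizes).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy per-frame encoder standing in for the real video/audio backbones."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, dim_out), nn.GELU(),
                                 nn.Linear(dim_out, dim_out))

    def forward(self, x):          # x: (batch, time, dim_in)
        return self.net(x)         # -> (batch, time, dim_out)


def ema_update(teacher, student, momentum=0.999):
    """Slowly-evolving momentum encoder: teacher <- m*teacher + (1-m)*student."""
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)


def masked_cosine_loss(pred, target, mask):
    """Negative cosine similarity, averaged over masked time steps only."""
    sim = F.cosine_similarity(pred, target.detach(), dim=-1)   # (batch, time)
    return -(sim * mask).sum() / mask.sum().clamp(min=1)


dim = 256
video_student, audio_student = Encoder(96, dim), Encoder(80, dim)
video_teacher, audio_teacher = copy.deepcopy(video_student), copy.deepcopy(audio_student)

# Lightweight predictor heads, one per student->target direction (hypothetical).
pred_a2a, pred_a2v = nn.Linear(dim, dim), nn.Linear(dim, dim)
pred_v2a = nn.Linear(dim, dim)

video, audio = torch.randn(2, 50, 96), torch.randn(2, 50, 80)
mask = (torch.rand(2, 50) < 0.3).float()          # which time steps are masked

# Students encode masked inputs (here, masked frames are simply zeroed out).
v_feat = video_student(video * (1 - mask).unsqueeze(-1))
a_feat = audio_student(audio * (1 - mask).unsqueeze(-1))

# Momentum teachers encode the unmasked inputs to produce contextualised targets.
with torch.no_grad():
    v_tgt = video_teacher(video)
    a_tgt = audio_teacher(audio)

# Asymmetric objective: audio predicts audio + video targets; video predicts audio only.
loss = (masked_cosine_loss(pred_a2a(a_feat), a_tgt, mask)
        + masked_cosine_loss(pred_a2v(a_feat), v_tgt, mask)
        + masked_cosine_loss(pred_v2a(v_feat), a_tgt, mask))
loss.backward()

# After the optimiser step, the teachers track the students via EMA.
ema_update(video_teacher, video_student)
ema_update(audio_teacher, audio_student)
```

Note how the asymmetry appears in the code: there is no video-to-video predictor, mirroring the design choice that the visual stream predicts only auditory targets while the auditory stream predicts targets from both modalities.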
