Codec2Vec: Self-Supervised Speech Representation Learning Using Neural Speech Codecs

Wei-Cheng Tseng, David Harwath


Abstract--Recent advancements in neural audio codecs have not only enabled superior audio compression but also enhanced speech synthesis techniques. Researchers are now exploring their potential as universal acoustic feature extractors for a broader range of speech processing tasks. Building on this trend, we introduce Codec2Vec, the first speech representation learning framework that relies exclusively on discrete audio codec units. This approach offers several advantages, including improved data storage and transmission efficiency, faster training, and enhanced data privacy. We explore masked prediction with various training target derivation strategies to thoroughly understand the effectiveness of this framework. Evaluated on the SUPERB benchmark, Codec2Vec achieves competitive performance compared to continuous-input models while reducing storage requirements by up to 16.5× and training time by 2.3×, showcasing its scalability and efficiency.

Over the past several years, the speech processing community has rapidly adopted self-supervised learning (SSL) followed by supervised fine-tuning as a general-purpose modeling approach for tasks ranging from automatic speech recognition and emotion recognition to speaker verification [1]-[3].
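The central mechanism the abstract describes is masked prediction over discrete codec units: mask some token positions in a codec-unit sequence and train an encoder to recover the original token ids. The following is a minimal sketch of that setup, not the paper's actual architecture; the Transformer configuration, the codebook size of 1024, and the 15% mask rate are all illustrative assumptions.

```python
# Minimal sketch of masked prediction over discrete codec tokens.
# All hyperparameters below are illustrative assumptions, not the
# paper's reported settings.
import torch
import torch.nn as nn

VOCAB_SIZE = 1024      # assumed codec codebook size
MASK_ID = VOCAB_SIZE   # extra embedding slot reserved for the [MASK] token
D_MODEL = 256

class MaskedCodecPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE + 1, D_MODEL)  # +1 for [MASK]
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)  # predict original token ids

    def forward(self, tokens, mask):
        # tokens: (batch, time) discrete codec units; mask: (batch, time) bool
        x = tokens.masked_fill(mask, MASK_ID)  # corrupt the masked positions
        h = self.encoder(self.embed(x))
        return self.head(h)

# Toy training step: the loss is computed at masked positions only.
model = MaskedCodecPredictor()
tokens = torch.randint(0, VOCAB_SIZE, (2, 100))  # stand-in codec units
mask = torch.rand(2, 100) < 0.15                 # assumed 15% mask rate
logits = model(tokens, mask)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```

Because the inputs are discrete token ids rather than waveforms or spectrograms, a corpus can be stored and streamed as small integer sequences, which is the source of the storage and training-time savings the abstract reports.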