Factorized RVQ-GAN For Disentangled Speech Tokenization
Khurana, Sameer, Klement, Dominik, Laurent, Antoine, Bobos, Dominik, Novosad, Juraj, Gazdik, Peter, Zhang, Ellen, Huang, Zili, Hussein, Amir, Marxer, Ricard, Masuyama, Yoshiki, Aihara, Ryo, Hori, Chiori, Germain, Francois G., Wichern, Gordon, Le Roux, Jonathan
arXiv.org Artificial Intelligence
We propose the Hierarchical Audio Codec (HAC), a unified neural speech codec that factorizes its bottleneck into three linguistic levels (acoustic, phonetic, and lexical) within a single model. HAC leverages two knowledge distillation objectives: one from a pre-trained speech encoder (HuBERT) for phoneme-level structure, and another from a text-based encoder (LaBSE) for lexical cues. Experiments on English and multilingual data show that HAC's factorized bottleneck yields disentangled token sets: one aligns with phonemes, while another captures word-level semantics. Quantitative evaluations confirm that HAC tokens preserve naturalness and provide interpretable linguistic information, outperforming single-level baselines in both disentanglement and reconstruction quality. These findings underscore HAC's potential as a unified discrete speech representation, bridging acoustic detail and lexical meaning for downstream speech generation and understanding tasks.
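To make the factorized bottleneck concrete, here is a minimal NumPy sketch of residual vector quantization (RVQ) with one codebook per level. This is an illustrative toy, not the authors' implementation: the codebooks, dimensions, and the idea of labeling the three stages "acoustic/phonetic/lexical" are assumptions for exposition; HAC's actual quantizers, distillation losses, and training are described in the paper.

```python
import numpy as np

def nearest_code(x, codebook):
    # Index of the codebook vector closest to x (L2 distance).
    d = np.linalg.norm(codebook - x, axis=1)
    return int(np.argmin(d))

def residual_vq(x, codebooks):
    """Residual VQ: each stage quantizes the residual left by the
    previous stage, emitting one discrete token per level."""
    tokens = []
    quantized = np.zeros_like(x)
    for cb in codebooks:
        idx = nearest_code(x - quantized, cb)
        tokens.append(idx)
        quantized = quantized + cb[idx]
    return tokens, quantized

rng = np.random.default_rng(0)
# Three hypothetical codebooks, one per factorized level
# (e.g., acoustic / phonetic / lexical in HAC's framing).
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]
x = rng.normal(size=8)
tokens, x_hat = residual_vq(x, codebooks)
# tokens: one code index per level; x_hat: sum of the selected codes.
```

In HAC's setup, a distillation objective would additionally pull one level's (pre- or post-quantization) features toward HuBERT embeddings and another toward LaBSE embeddings, which is what encourages each token set to specialize.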
Jun-19-2025
- Country:
  - Europe
    - Czechia > South Moravian Region > Brno (0.04)
    - France > Provence-Alpes-Côte d'Azur > Bouches-du-Rhône > Marseille (0.04)
  - North America
    - Canada > Quebec > Montreal (0.04)
    - United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Genre:
- Research Report > New Finding (0.34)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning (1.00)
    - Natural Language (1.00)
    - Speech > Speech Recognition (0.95)