What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training
de Heer Kloots, Marianne, Mohebbi, Hosein, Pouw, Charlotte, Shen, Gaofei, Zuidema, Willem, Bentum, Martijn
arXiv.org Artificial Intelligence
How language-specific are speech representations learned by self-supervised models? Existing work has shown that a range of linguistic features can be successfully decoded from end-to-end models trained only on speech recordings. However, it is less clear to what extent pre-training on a specific language improves the encoding of language-specific linguistic information. Here we test the encoding of Dutch phonetic and lexical information in internal representations of self-supervised Wav2Vec2 models. Pre-training exclusively on Dutch improves the representation of Dutch linguistic features as compared to pre-training on similar amounts of English or larger amounts of multilingual data. This language-specific advantage is well-detected by trained clustering or classification probes, and partially observable using zero-shot metrics.
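The abstract refers to trained classification probes applied to a model's internal representations. As a minimal, hedged sketch of that idea: the snippet below substitutes synthetic 8-dimensional vectors for the frame-level hidden states a real study would extract from a pre-trained Wav2Vec2 model, and fits a nearest-class-centroid probe to predict a (hypothetical) two-way phone label. All data, dimensions, and class structure here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical stand-in for frame-level model activations (assumption):
# two "phone" classes, each a Gaussian cloud in an 8-dim activation space.
rng = np.random.default_rng(0)
dim, n_per_class = 8, 200
class_means = rng.normal(0.0, 2.0, size=(2, dim))
X = np.vstack([rng.normal(class_means[c], 1.0, size=(n_per_class, dim))
               for c in (0, 1)])
y = np.repeat([0, 1], n_per_class)

# Shuffle, then split into a train set (probe fitting) and a test set.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# Nearest-class-centroid probe: the simplest trained classifier one can
# fit on representations; real studies often use linear probes instead.
centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=-1)
pred = dists.argmin(axis=1)
accuracy = (pred == y_te).mean()
print(f"probe accuracy: {accuracy:.2f}")  # well above the 0.50 chance level
```

The logic of the comparison in the paper is then: if representations from a Dutch-pre-trained model yield higher probe accuracy on Dutch phone labels than representations from an English or multilingual model, the Dutch features are encoded more strongly.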
Jul-11-2025