Benchmarking Training Paradigms, Dataset Composition, and Model Scaling for Child ASR in ESPnet
Anyu Ying, Natarajan Balaji Shankar, Chyi-Jiunn Lin, Mohan Shi, Pu Wang, Hye-jin Shim, Siddhant Arora, Hugo Van hamme, Abeer Alwan, Shinji Watanabe
arXiv.org Artificial Intelligence
Despite advancements in ASR, child speech recognition remains challenging due to acoustic variability and limited annotated data. While fine-tuning adult ASR models on child speech is common, comparisons with flat-start training remain underexplored. We compare flat-start training across multiple datasets, SSL representations (WavLM, XEUS), and decoder architectures. Our results show that SSL representations are biased toward adult speech, with flat-start training on child speech mitigating these biases. We also analyze model scaling, finding consistent improvements up to 1B parameters, beyond which performance plateaus. Additionally, age-related ASR and speaker verification analysis highlights the limitations of proprietary models like Whisper, emphasizing the need for open-data models for reliable child speech research. All investigations are conducted using ESPnet, and our publicly available benchmark provides insights into training strategies for robust child speech processing.
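The age-related ASR analysis mentioned above implies breaking word error rate down by speaker age group. A minimal sketch of such a per-group computation, assuming plain word-level Levenshtein WER with errors pooled within each group (the age buckets and sample utterances below are hypothetical, not the paper's data or pipeline):

```python
from collections import defaultdict

def word_edits(ref: str, hyp: str) -> int:
    """Levenshtein distance between two utterances at the word level."""
    r, h = ref.split(), hyp.split()
    prev = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, hw in enumerate(h, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (rw != hw)))   # substitution / match
        prev = cur
    return prev[-1]

def wer(ref: str, hyp: str) -> float:
    """Word error rate of a single utterance."""
    return word_edits(ref, hyp) / max(len(ref.split()), 1)

def wer_by_age(utts):
    """Pool errors per age group; utts = [(age_group, reference, hypothesis), ...].

    Pooling (total edits / total reference words) weights long utterances
    more than averaging per-utterance WER would.
    """
    errs, words = defaultdict(int), defaultdict(int)
    for age, ref, hyp in utts:
        errs[age] += word_edits(ref, hyp)
        words[age] += len(ref.split())
    return {age: errs[age] / words[age] for age in words}
```

Usage: `wer_by_age([("4-6", "the cat sat", "the bat sat"), ("7-9", "the cat sat", "the cat sat")])` yields one pooled WER per age bucket, which is the kind of breakdown an age-related error analysis would tabulate.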
Aug-25-2025
- Country:
- Asia > South Korea
- Europe > Belgium
- Flanders > Flemish Brabant > Leuven (0.04)
- North America > United States
- California > Los Angeles County
- Los Angeles (0.14)
- Pennsylvania > Allegheny County
- Pittsburgh (0.05)
- Genre:
- Research Report > New Finding (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language (1.00)
- Speech > Speech Recognition (1.00)