Dynamic Acoustic Model Architecture Optimization in Training for ASR
Xu, Jingjing, Yang, Zijian, Zeyer, Albert, Beck, Eugen, Schlueter, Ralf, Ney, Hermann
–arXiv.org Artificial Intelligence
Architecture design is inherently complex. Existing approaches rely on either handcrafted rules, which demand extensive empirical expertise, or automated methods like neural architecture search, which are computationally intensive. In this paper, we introduce DMAO, an architecture optimization framework that employs a grow-and-drop strategy to automatically reallocate parameters during training. This reallocation shifts resources from less-utilized areas to those parts of the model where they are most beneficial. Notably, DMAO introduces only negligible training overhead at a given model complexity. We evaluate DMAO through experiments with CTC on the LibriSpeech, TED-LIUM-v2, and Switchboard datasets. The results show that, using the same amount of training resources, our proposed DMAO consistently improves WER by up to 6% relative across various architectures, model sizes, and datasets. Furthermore, we analyze the pattern of parameter redistribution and uncover insightful findings.
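The grow-and-drop idea can be illustrated with a minimal sketch. The paper's actual utilization criterion, schedule, and granularity are not given in this abstract, so the `scores` input and the `move_fraction` parameter below are hypothetical stand-ins: parameters are dropped from the least-utilized module and grown in the most-utilized one, keeping the total budget (and thus model complexity) fixed.

```python
def reallocate(params_per_module, scores, move_fraction=0.25):
    """One hypothetical grow-and-drop reallocation step.

    params_per_module: dict, module name -> parameter count
    scores: dict, module name -> utilization score (higher = more useful)
    move_fraction: fraction of the least-utilized module's parameters to move
    """
    worst = min(scores, key=scores.get)  # drop parameters here
    best = max(scores, key=scores.get)   # grow parameters here
    moved = int(params_per_module[worst] * move_fraction)
    new_alloc = dict(params_per_module)
    new_alloc[worst] -= moved
    new_alloc[best] += moved
    # Total parameter count is unchanged, matching the fixed-complexity claim.
    return new_alloc
```

For example, with two modules of 100 parameters each and scores {0.1, 0.9}, one step moves 25 parameters from the low-scoring module to the high-scoring one while the total stays at 200.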
Jun-19-2025