Leveraging Allophony in Self-Supervised Speech Models for Atypical Pronunciation Assessment
Kwanghee Choi, Eunjung Yeo, Kalvin Chang, Shinji Watanabe, David Mortensen
Allophony refers to the variation in the phonetic realization of a phoneme based on its phonetic environment. Modeling allophones is crucial for atypical pronunciation assessment, which involves distinguishing atypical from typical pronunciations. However, recent phoneme classifier-based approaches often simplify this by treating various realizations as a single phoneme, bypassing the complexity of modeling allophonic variation. Motivated by the acoustic modeling capabilities of frozen self-supervised speech model (S3M) features, we propose MixGoP, a novel approach that leverages Gaussian mixture models to model phoneme distributions with multiple subclusters. Our experiments show that MixGoP achieves state-of-the-art performance across four out of five datasets, including dysarthric and non-native speech. Our analysis further suggests that S3M features capture allophonic variation more effectively than MFCCs and Mel spectrograms, highlighting the benefits of integrating MixGoP with S3M features.
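The abstract describes MixGoP as fitting a Gaussian mixture per phoneme over frozen S3M features, so that each mixture component can capture an allophone-like subcluster, and scoring pronunciations by likelihood under the target phoneme's mixture. The following is a minimal illustrative sketch of that idea only; the function names, hyperparameters, and the use of scikit-learn's GaussianMixture are assumptions for illustration and are not taken from the paper's implementation.

```python
# Illustrative sketch (not the authors' code): one GMM per phoneme over frozen
# S3M frame features, with multiple subclusters per phoneme, scored by average
# log-likelihood as a goodness-of-pronunciation-style measure.
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_phoneme_gmms(features_by_phoneme, n_subclusters=4, seed=0):
    """Fit a multi-component GMM for each phoneme.

    features_by_phoneme: dict mapping a phoneme label to an (n_frames, dim)
    array of frozen S3M features aligned to that phoneme in typical speech.
    """
    gmms = {}
    for phoneme, feats in features_by_phoneme.items():
        gmm = GaussianMixture(
            n_components=n_subclusters,   # subclusters intended to absorb allophonic variation
            covariance_type="diag",
            random_state=seed,
        )
        gmm.fit(feats)
        gmms[phoneme] = gmm
    return gmms


def mixgop_like_score(gmms, phoneme, frames):
    """Average log-likelihood of test frames under the target phoneme's GMM.

    Lower scores indicate the realization is less typical for that phoneme,
    which is the signal used for atypical pronunciation assessment.
    """
    return float(np.mean(gmms[phoneme].score_samples(frames)))
```

In this sketch, treating each phoneme as a single Gaussian would collapse its allophones into one mode; allowing several mixture components is what lets typical but context-dependent realizations score well while genuinely atypical ones score poorly.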
arXiv.org Artificial Intelligence
Feb-10-2025
- Country:
- North America > United States (0.28)
- Genre:
- Research Report > New Finding (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning > Statistical Learning (1.00)
      - Natural Language (0.68)
      - Speech (0.88)
    - Data Science > Data Mining (0.67)