Training Articulatory Inversion Models for Interspeaker Consistency
Charles McGhee, Mark J. F. Gales, Kate M. Knill
arXiv.org Artificial Intelligence
Acoustic-to-Articulatory Inversion (AAI) attempts to model the inverse mapping from speech to articulation. Exact articulatory prediction from speech alone may be impossible, as speakers can choose different forms of articulation seemingly without reference to their vocal tract structure. However, once a speaker has selected an articulatory form, their productions vary minimally. Recent works in AAI have proposed adapting Self-Supervised Learning (SSL) models to single-speaker datasets, claiming that these single-speaker models provide a universal articulatory template. In this paper, we investigate whether SSL-adapted models trained on single and multi-speaker data produce articulatory targets which are consistent across speaker identities for English and Russian. We do this through the use of a novel evaluation method which extracts articulatory targets using minimal pair sets. We also present a training method which can improve interspeaker consistency using only speech data.
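The abstract describes evaluating whether AAI models produce consistent articulatory targets across speakers by extracting targets from minimal pair sets. The paper's actual metric is not given here, so the following is only a hypothetical sketch of one way such a consistency check could look: represent each minimal pair's articulatory target as the contrast between the mean predicted articulatory features of its two members, then compare those contrast vectors across speakers with cosine similarity (the function names and the cosine-based score are assumptions, not the authors' method).

```python
import numpy as np

def articulatory_target(feats_a, feats_b):
    """Contrast vector for one minimal pair: the difference between the
    mean articulatory feature vectors predicted for its two members
    (e.g. frames of /s/ vs. frames of /sh/). Shapes: (T, D) arrays."""
    return feats_a.mean(axis=0) - feats_b.mean(axis=0)

def interspeaker_consistency(targets_by_speaker):
    """Mean pairwise cosine similarity of per-speaker target vectors for
    the same minimal pair; 1.0 means every speaker's predicted contrast
    points in the same direction in articulatory space."""
    sims = []
    n = len(targets_by_speaker)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = targets_by_speaker[i], targets_by_speaker[j]
            sims.append(float(np.dot(a, b)
                              / (np.linalg.norm(a) * np.linalg.norm(b))))
    return float(np.mean(sims))
```

Under this sketch, a model whose predicted contrasts for a pair like "sip"/"ship" point the same way for every speaker would score near 1.0, while speaker-dependent targets would pull the score toward 0 or below.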
Jun-10-2025