POTSA: A Cross-Lingual Speech Alignment Framework for Low Resource Speech-to-Text Translation
Xuanchen Li, Chenrui Cui, Tianrui Wang, Meng Ge, Zikang Huang, Jin Li, Yizhou Peng, Longbiao Wang, Jianwu Dang, Nyima Tashi
arXiv.org Artificial Intelligence
Speech Large Language Models (SpeechLLMs) have achieved breakthroughs in multilingual speech-to-text translation (S2TT). However, existing approaches often overlook semantic commonalities across source languages, leading to biased translation performance. In this work, we propose POTSA (Parallel Optimal Transport for Speech Alignment), a new framework based on cross-lingual parallel speech pairs and Optimal Transport (OT), designed to bridge the gap between high- and low-resource translation. First, we introduce a Bias Compensation module to coarsely align initial speech representations across languages. Second, we impose token-level OT constraints on a Q-Former using parallel speech pairs to establish fine-grained consistency of representations. Finally, we apply a layer-scheduling strategy that focuses the OT constraints on the most semantically beneficial layers. Experiments on the FLEURS dataset show that our method achieves state-of-the-art performance, with +0.93 BLEU on average over five common languages and +5.05 BLEU on zero-shot languages, using only 10 hours of parallel speech per source language.
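To make the token-level OT constraint concrete, the following is a minimal sketch (not the paper's implementation) of an entropic-regularized OT alignment loss between the token embeddings of a parallel speech pair, using the standard Sinkhorn iteration with uniform marginals and a cosine-distance cost. All function and parameter names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.1, n_iters=200):
    """Entropic-regularized OT plan (Sinkhorn) for uniform marginals.

    cost: (n, m) pairwise cost matrix between source/target tokens.
    Returns a transport plan whose entries sum to 1.
    """
    n, m = cost.shape
    a = np.full(n, 1.0 / n)          # uniform source marginal
    b = np.full(m, 1.0 / m)          # uniform target marginal
    K = np.exp(-cost / reg)          # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):         # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def ot_alignment_loss(src_tokens, tgt_tokens, reg=0.1):
    """Expected transport cost between two token-embedding sequences.

    src_tokens: (n, d) embeddings for one language of a parallel pair.
    tgt_tokens: (m, d) embeddings for the other language.
    """
    s = src_tokens / np.linalg.norm(src_tokens, axis=1, keepdims=True)
    t = tgt_tokens / np.linalg.norm(tgt_tokens, axis=1, keepdims=True)
    cost = 1.0 - s @ t.T             # cosine distance in [0, 2]
    plan = sinkhorn_plan(cost, reg)
    return float((plan * cost).sum())
```

Minimizing such a loss pulls the two languages' token representations toward a shared space: identical sequences incur near-zero transport cost, while semantically mismatched ones incur a large cost.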
Nov-13-2025