Soundwave: Less is More for Speech-Text Alignment in LLMs
Yuhao Zhang, Zhiheng Liu, Fan Bu, Ruiyu Zhang, Benyou Wang, Haizhou Li
Existing end-to-end speech large language models (LLMs) usually rely on large-scale annotated data for training, while data-efficient training has not been explored in depth. We focus on two fundamental problems between speech and text: the representation space gap and sequence length inconsistency. We propose Soundwave, which uses an efficient training strategy and a novel architecture to address these issues. Results show that Soundwave outperforms the advanced Qwen2-Audio on speech translation and AIR-Bench speech tasks while using only one-fiftieth of the training data. Further analysis shows that Soundwave still retains its intelligence during conversation. The project is available at https://github.com/FreedomIntelligence/Soundwave.
arXiv.org Artificial Intelligence
Feb-18-2025
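
To make the two problems named in the abstract concrete, here is a minimal, hypothetical sketch of a speech-to-text adapter: a linear projection bridges the representation space gap between a speech encoder and the LLM's text embedding space, and average pooling along time reduces the sequence length mismatch. The feature dimensions (1280 for the speech encoder, 4096 for the LLM) and the pooling factor are illustrative assumptions, not Soundwave's actual architecture or training strategy.

```python
import torch
import torch.nn as nn


class SpeechToTextAdapter(nn.Module):
    """Illustrative adapter (hypothetical, not Soundwave's design):
    - nn.Linear maps speech-encoder features into the LLM's text embedding
      space (representation space gap).
    - nn.AvgPool1d pools along the time axis to shorten the frame sequence
      toward text-like lengths (sequence length inconsistency).
    """

    def __init__(self, speech_dim=1280, text_dim=4096, downsample=4):
        super().__init__()
        self.proj = nn.Linear(speech_dim, text_dim)
        self.pool = nn.AvgPool1d(kernel_size=downsample, stride=downsample)

    def forward(self, speech_feats):
        # speech_feats: (batch, frames, speech_dim)
        x = self.proj(speech_feats)                        # (batch, frames, text_dim)
        x = self.pool(x.transpose(1, 2)).transpose(1, 2)   # (batch, frames/downsample, text_dim)
        return x


# Example: 1500 encoder frames (e.g., 30 s of audio) shrink to 375 pseudo-tokens.
adapter = SpeechToTextAdapter()
feats = torch.randn(1, 1500, 1280)
print(adapter(feats).shape)  # torch.Size([1, 375, 4096])
```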