LayAlign: Enhancing Multilingual Reasoning in Large Language Models via Layer-Wise Adaptive Fusion and Alignment Strategy
Zhiwen Ruan, Yixia Li, He Zhu, Longyue Wang, Weihua Luo, Kaifu Zhang, Yun Chen, Guanhua Chen
arXiv.org Artificial Intelligence
Despite being pretrained on multilingual corpora, large language models (LLMs) exhibit suboptimal performance on low-resource languages. Recent approaches leverage multilingual encoders alongside LLMs by introducing trainable parameters connecting the two models. However, these methods typically focus on the encoder's output, overlooking valuable information from other layers. We propose Layer-Wise Adaptive Fusion and Alignment Strategy (LayAlign), a framework that integrates representations from all encoder layers, coupled with an adaptive fusion-enhanced attention mechanism to enable layer-wise interaction between the LLM and the multilingual encoder. Extensive experiments on multilingual reasoning tasks, along with analyses of the learned representations, show that our approach consistently outperforms existing baselines.
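The abstract describes two components: learnable fusion weights that combine hidden states from every encoder layer, and a cross-attention mechanism that lets each LLM layer attend to the fused representation. Below is a minimal PyTorch sketch of that idea; the class names, the softmax-normalized `fusion_logits`, the gated residual, and all shapes and hyperparameters are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of layer-wise fusion with cross-attention into an LLM.
# All names and parameterizations here are hypothetical illustrations.
import torch
import torch.nn as nn


class LayerWiseFusion(nn.Module):
    """Fuses hidden states from *all* encoder layers into one representation
    per LLM layer, using learnable, softmax-normalized per-layer weights."""

    def __init__(self, num_enc_layers: int, num_llm_layers: int,
                 d_enc: int, d_llm: int):
        super().__init__()
        # One weight vector over encoder layers for each LLM layer
        # (an assumed parameterization of "layer-wise adaptive fusion").
        self.fusion_logits = nn.Parameter(torch.zeros(num_llm_layers, num_enc_layers))
        self.proj = nn.Linear(d_enc, d_llm)  # trainable connector to LLM width

    def forward(self, enc_layer_states: torch.Tensor, llm_layer: int) -> torch.Tensor:
        # enc_layer_states: (num_enc_layers, batch, src_len, d_enc)
        weights = torch.softmax(self.fusion_logits[llm_layer], dim=-1)
        fused = torch.einsum("l,lbsd->bsd", weights, enc_layer_states)
        return self.proj(fused)  # (batch, src_len, d_llm)


class FusionCrossAttention(nn.Module):
    """Illustrative cross-attention letting an LLM layer's hidden states
    attend to the fused multilingual-encoder representation."""

    def __init__(self, d_llm: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_llm, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # gated residual (an assumption)

    def forward(self, llm_hidden: torch.Tensor, fused_enc: torch.Tensor) -> torch.Tensor:
        ctx, _ = self.attn(query=llm_hidden, key=fused_enc, value=fused_enc)
        # Zero-initialized gate starts as an identity map, so the pretrained
        # LLM is undisturbed at the beginning of training.
        return llm_hidden + torch.tanh(self.gate) * ctx


# Example shapes: a 25-layer encoder fused into a 32-layer LLM.
fusion = LayerWiseFusion(num_enc_layers=25, num_llm_layers=32,
                         d_enc=1024, d_llm=4096)
```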
Feb-16-2025
- Country:
  - Asia (0.46)
  - Europe (0.46)
  - North America (0.46)
- Genre:
  - Research Report > New Finding (0.93)