Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models

Bandarkar, Lucas, Muller, Benjamin, Yuvraj, Pritish, Hou, Rui, Singhal, Nayan, Lv, Hongjiang, Liu, Bing

arXiv.org Artificial Intelligence 

Model merging, such as model souping, is the practice of combining different models with the same architecture without further training. In this work, we present a model merging methodology that addresses the difficulty of fine-tuning Large Language Models (LLMs) for target tasks in non-English languages, where task-specific data is often unavailable. We focus on mathematical reasoning: lacking in-language math data, we facilitate cross-lingual transfer by composing language and math capabilities. To do so, we fine-tune separate experts, one on English math instruction data and one on generic instruction data in the target language. We then replace the top and bottom transformer layers of the math expert directly with layers from the language expert, which enhances math performance in the target language. The resulting merged models outperform the individual experts and other merging methods on the math benchmark MGSM by 10% across four major languages where math instruction data is scarce. In addition, this layer swapping is simple, inexpensive, and intuitive, as it is based on an interpretative analysis of the most important parameter changes during the fine-tuning of each expert. The ability to successfully re-compose LLMs for cross-lingual transfer in this manner opens up future possibilities to combine model expertise, create modular solutions, and transfer reasoning capabilities across languages, all post hoc.

Instruction fine-tuning Large Language Models (LLMs) is necessary to customize pre-trained models for real-world applications. This fine-tuning is especially critical in multilingual settings because most popular open-source LLMs have been pretrained on highly English-centric data (Llama et al., 2024; Yang et al., 2024a; Jiang et al., 2023). Furthermore, the scarcity of high-quality labeled data for post-training in non-English languages, and the tremendous cost of procuring it, further exacerbate the inequality.
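The layer-swapping operation described above can be sketched as a manipulation of model state dictionaries: the bottom-k and top-k transformer layers of the math expert are overwritten with the corresponding layers of the language expert, while the middle layers are kept. The sketch below is a minimal illustration under assumed Llama-style parameter naming ("model.layers.N. ..."); the function names, the value of k, and the key format are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of layer swapping between two fine-tuned "experts".
# Assumption: parameters follow Llama-style names such as
# "model.layers.12.self_attn.q_proj.weight"; non-layer parameters
# (embeddings, final norm, lm_head) are kept from the math expert.

def _layer_index(param_name):
    """Return the transformer-layer index encoded in a parameter name,
    or None for parameters outside the layer stack."""
    parts = param_name.split(".")
    if "layers" in parts:
        return int(parts[parts.index("layers") + 1])
    return None

def layer_swap(math_state, lang_state, num_layers, k):
    """Merge two state dicts: the bottom k and top k transformer layers
    come from the language expert, everything else from the math expert."""
    swapped = set(range(k)) | set(range(num_layers - k, num_layers))
    merged = {}
    for name, tensor in math_state.items():
        layer = _layer_index(name)
        if layer is not None and layer in swapped:
            merged[name] = lang_state[name]  # take from language expert
        else:
            merged[name] = tensor            # keep math expert weights
    return merged
```

In a real setting the state dicts would come from two fine-tunes of the same pretrained checkpoint (e.g. via `model.state_dict()` in PyTorch), so every key exists in both experts and the tensors are shape-compatible by construction.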
