Code-Switching Curriculum Learning for Multilingual Transfer in LLMs
Haneul Yoo, Cheonbok Park, Sangdoo Yun, Alice Oh, Hwaran Lee
Large language models (LLMs) now exhibit near human-level performance on various tasks, but their performance drops drastically outside a handful of high-resource languages due to the imbalance in pre-training data. Inspired by the human process of second language acquisition, particularly code-switching (the practice of alternating languages within a conversation), we propose code-switching curriculum learning (CSCL) to enhance cross-lingual transfer for LLMs. CSCL mimics the stages of human language learning by progressively training models with a curriculum consisting of 1) token-level code-switching, 2) sentence-level code-switching, and 3) monolingual corpora. Using Qwen 2 as our underlying model, we demonstrate the efficacy of CSCL in improving language transfer to Korean, achieving significant performance gains compared to monolingual continual pre-training methods. Ablation studies reveal that both token- and sentence-level code-switching significantly enhance cross-lingual transfer and that curriculum learning amplifies these effects. We also extend our findings to other languages, including Japanese (high-resource) and Indonesian (low-resource), and to two additional models (Gemma 2 and Phi 3.5). We further show that CSCL mitigates spurious correlations between language resources and safety alignment, presenting a robust, efficient framework for more equitable language transfer in LLMs. We observe that CSCL is effective in low-resource settings where high-quality monolingual corpora for language transfer are scarce.
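The abstract describes a three-stage progression from mixed-language to monolingual data. The sketch below illustrates that idea in Python under stated assumptions: the lexicon, helper names, and mixing probabilities are hypothetical and chosen for illustration, not taken from the paper's implementation.

```python
"""Minimal sketch of a three-stage code-switching curriculum (CSCL).

Not the authors' released code: the data-construction helpers, the
English->Korean lexicon, and the mixing probabilities are assumptions.
Only the standard library is used, so the sketch runs as-is.
"""
import random

# Hypothetical English->Korean lexicon used for token-level switching.
LEXICON = {"language": "언어", "model": "모델", "data": "데이터"}


def token_level_switch(sentence: str, p: float = 0.3) -> str:
    """Replace individual source-language tokens with target-language
    equivalents with probability p (token-level code-switching)."""
    return " ".join(
        LEXICON[tok] if tok in LEXICON and random.random() < p else tok
        for tok in sentence.split()
    )


def sentence_level_switch(parallel_pairs, p: float = 0.5):
    """For each (source, target) sentence pair, emit the whole target-language
    sentence with probability p (sentence-level code-switching)."""
    return [tgt if random.random() < p else src for src, tgt in parallel_pairs]


def build_curriculum(parallel_pairs, monolingual_target):
    """Order the training stream from mixed to monolingual, mimicking the
    token -> sentence -> monolingual progression described in the abstract."""
    stage1 = [token_level_switch(src) for src, _ in parallel_pairs]
    stage2 = sentence_level_switch(parallel_pairs)
    stage3 = list(monolingual_target)
    return stage1 + stage2 + stage3  # fed to continual pre-training in this order


if __name__ == "__main__":
    pairs = [("the language model learns from data",
              "언어 모델은 데이터에서 학습한다")]
    mono = ["한국어 단일 언어 코퍼스 문장"]
    for line in build_curriculum(pairs, mono):
        print(line)
```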
arXiv.org Artificial Intelligence
Nov-4-2024