Enhancing LLM Language Adaption through Cross-lingual In-Context Pre-training
Linjuan Wu, Haoran Wei, Huan Lin, Tianhao Li, Baosong Yang, Fei Huang, Weiming Lu
–arXiv.org Artificial Intelligence
Large language models (LLMs) exhibit remarkable multilingual capabilities despite English-dominated pre-training, attributed to cross-lingual mechanisms during pre-training. Existing methods for enhancing cross-lingual transfer remain constrained by parallel resources, suffering from limited linguistic and domain coverage. We propose Cross-lingual In-context Pre-training (CrossIC-PT), a simple and scalable approach that enhances cross-lingual transfer by leveraging semantically related bilingual texts via simple next-word prediction. We construct CrossIC-PT samples by interleaving semantically related bilingual Wikipedia documents into a single context window. To address window-size constraints, we implement a systematic segmentation policy to split long bilingual document pairs into chunks while adjusting the sliding-window mechanism to preserve contextual coherence. We further extend data availability through a semantic retrieval framework to construct CrossIC-PT samples from a web-crawled corpus. Experimental results demonstrate that CrossIC-PT improves multilingual performance on three models (Llama-3.1-8B, Qwen2.5-7B, and Qwen2.5-1.5B) across six target languages, yielding performance gains of 3.79%, 3.99%, and 1.95%, respectively, with additional improvements after data augmentation.
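The listing does not include code; the following is a minimal sketch, assuming hypothetical helper names (`tokenize`, `MAX_WINDOW`, `OVERLAP`) and illustrative token budgets, of how a semantically related bilingual document pair could be chunked with an overlapping sliding window and interleaved into single-context samples in the spirit of CrossIC-PT. It is not the authors' released implementation.

```python
from typing import Iterator

MAX_WINDOW = 4096   # context window size in tokens (assumed)
OVERLAP = 256       # sliding-window overlap between consecutive chunks (assumed)


def tokenize(text: str) -> list[str]:
    """Placeholder tokenizer; a real setup would use the target model's tokenizer."""
    return text.split()


def chunk(tokens: list[str], size: int, overlap: int) -> Iterator[list[str]]:
    """Split a token sequence into overlapping chunks of at most `size` tokens."""
    step = max(size - overlap, 1)
    for start in range(0, len(tokens), step):
        piece = tokens[start:start + size]
        if piece:
            yield piece
        if start + size >= len(tokens):
            break


def build_crossic_samples(doc_src: str, doc_tgt: str) -> list[list[str]]:
    """Interleave chunks of a source-language document and its semantically
    related target-language counterpart into single-context training samples."""
    half = MAX_WINDOW // 2  # budget half the window per language (assumption)
    src_chunks = list(chunk(tokenize(doc_src), half, OVERLAP))
    tgt_chunks = list(chunk(tokenize(doc_tgt), half, OVERLAP))

    samples = []
    for i in range(max(len(src_chunks), len(tgt_chunks))):
        sample: list[str] = []
        if i < len(src_chunks):
            sample.extend(src_chunks[i])
        if i < len(tgt_chunks):
            sample.extend(tgt_chunks[i])
        samples.append(sample[:MAX_WINDOW])
    return samples
```

In the web-crawled extension described above, document pairs would presumably first be matched by a semantic retrieval step (e.g., cross-lingual embedding similarity) before being interleaved this way.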
Sep-22-2025
- Country:
  - Africa > Rwanda
  - Asia > China > Zhejiang Province (0.04)
  - Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
  - Asia > Thailand > Bangkok > Bangkok (0.04)
  - Europe > Belgium > Brussels-Capital Region > Brussels (0.04)
  - North America > Mexico > Mexico City > Mexico City (0.04)
  - North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Genre:
- Research Report > New Finding (0.87)
- Industry:
- Education (0.68)
- Technology: