XL-Instruct: Synthetic Data for Cross-Lingual Open-Ended Generation
Vivek Iyer, Ricardo Rei, Pinzhen Chen, Alexandra Birch
arXiv.org Artificial Intelligence
Cross-lingual open-ended generation, i.e., generating responses in a desired language different from that of the user's query, is an important yet understudied problem. We introduce XL-AlpacaEval, a new benchmark for evaluating cross-lingual generation capabilities in Large Language Models (LLMs), and propose XL-Instruct, a high-quality synthetic data generation method. Fine-tuning with just 8K XL-Instruct-generated instructions significantly improves model performance, increasing the win rate against GPT-4o-Mini from 7.4% to 21.5%, and improving on several fine-grained quality metrics. Additionally, models fine-tuned on XL-Instruct exhibit strong zero-shot transfer to both English-only and multilingual generation tasks. Given its consistent gains across the board, we strongly recommend incorporating XL-Instruct in the post-training pipeline of future multilingual LLMs. To facilitate further research, we will publicly and freely release the XL-Instruct and XL-AlpacaEval datasets, which constitute two of the few cross-lingual resources currently available in the literature.
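The win-rate figures quoted in the abstract (7.4% vs. 21.5% against GPT-4o-Mini) are pairwise comparison scores of the kind reported by AlpacaEval-style benchmarks. The paper does not include code here, so the following is only a minimal illustrative sketch of how such a win rate is commonly computed from per-prompt judge verdicts; the tie-handling convention (ties count as half a win) and the toy verdict list are assumptions, not the authors' evaluation pipeline.

```python
# Illustrative sketch only: pairwise win rate from LLM-judge verdicts.
# Not the paper's actual evaluation code or data.
from collections import Counter


def win_rate(verdicts: list[str]) -> float:
    """Fraction of pairwise comparisons the candidate model wins.

    `verdicts` holds one label per benchmark prompt: "win", "loss", or "tie".
    Counting ties as half a win is one common convention; others discard them.
    """
    counts = Counter(verdicts)
    total = sum(counts.values())
    return (counts["win"] + 0.5 * counts["tie"]) / total if total else 0.0


# Toy example with 10 hypothetical verdicts (made-up numbers for illustration).
example = ["win", "loss", "loss", "win", "tie",
           "loss", "loss", "win", "loss", "loss"]
print(f"win rate: {win_rate(example):.1%}")  # -> 35.0%
```

Under this convention, the reported jump from 7.4% to 21.5% simply means the fine-tuned model's responses were preferred over GPT-4o-Mini's on roughly three times as many benchmark prompts after training on the 8K XL-Instruct instructions.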
Mar-29-2025