Language Imbalance Driven Rewarding for Multilingual Self-improving

Wen Yang, Junhong Wu, Chen Wang, Chengqing Zong, Jiajun Zhang

arXiv.org Artificial Intelligence 

Large Language Models (LLMs) have achieved state-of-the-art performance across numerous tasks. However, these advancements have predominantly benefited "first-class" languages such as English and Chinese, leaving many other languages underrepresented. This imbalance, while limiting broader applications, generates a natural preference ranking between languages, offering an opportunity to bootstrap the multilingual capabilities of LLMs in a self-improving manner. Thus, we propose Language Imbalance Driven Rewarding, where the inherent imbalance between dominant and non-dominant languages within LLMs is leveraged as a reward signal. Iterative DPO training demonstrates that this approach not only enhances LLM performance in non-dominant languages but also improves capability in the dominant language, thereby yielding a reward signal that can be applied iteratively. Fine-tuning Meta-Llama-3-8B-Instruct over two iterations of this approach results in continuous improvements in multilingual performance across instruction-following and arithmetic reasoning tasks, evidenced by an average improvement of 7.46% in win rate on the X-AlpacaEval leaderboard and 13.9% in accuracy on the MGSM benchmark. This work serves as an initial exploration, paving the way for multilingual self-improvement of LLMs.

Large Language Models (LLMs) have revolutionized the field of Natural Language Processing (NLP) with superior performance across numerous tasks. However, existing studies show that, due to the imbalance of pre-training and fine-tuning data across languages, current LLMs have predominantly benefited a few "first-class" languages, particularly English and Chinese, thereby overlooking a wide range of other languages (Qin et al., 2024). Given that LLMs are used worldwide, such language imbalance presents significant risks for users who operate in less dominant languages (Deshpande et al., 2023). To this end, enhancing the multilingual performance of LLMs has gained increasing attention. Previous research predominantly frames this imbalance as an issue to be resolved, often addressing it through multilingual training and cross-lingual alignment.
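The abstract describes the reward construction only at a high level. The sketch below is a hypothetical illustration of one plausible instantiation: preference pairs are built by treating a response routed through the dominant language (and translated back into the target language) as preferred over the model's direct response in the non-dominant language, with the resulting pairs feeding standard DPO training. The names `build_pairs`, `generate`, `translate`, and the language codes are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: deriving DPO preference pairs from language imbalance.
# `generate` and `translate` are placeholder callables standing in for calls to
# the LLM being improved; they are assumptions, not an API from the paper.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PreferencePair:
    prompt: str    # instruction in the non-dominant target language
    chosen: str    # response routed through the dominant language
    rejected: str  # direct response in the non-dominant language


def build_pairs(
    prompts: List[str],
    generate: Callable[[str, str], str],   # (prompt, language) -> response
    translate: Callable[[str, str], str],  # (text, target_language) -> translation
    dominant: str = "en",
    target: str = "sw",
) -> List[PreferencePair]:
    """Use the dominant-language pathway as the implicit reward: its translated
    answer is 'chosen', the direct target-language answer is 'rejected'."""
    pairs: List[PreferencePair] = []
    for prompt in prompts:
        # Direct answer in the under-served language (assumed weaker).
        rejected = generate(prompt, target)
        # Answer via the dominant language, then translated back by the same model.
        dominant_answer = generate(translate(prompt, dominant), dominant)
        chosen = translate(dominant_answer, target)
        pairs.append(PreferencePair(prompt=prompt, chosen=chosen, rejected=rejected))
    return pairs
```

Under this reading, no external reward model or human annotation is needed: each iteration of DPO on such pairs would strengthen the non-dominant languages, and the improved model could then regenerate pairs for the next iteration.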