Mind the Gap... or Not? How Translation Errors and Evaluation Details Skew Multilingual Results
Jan-Thorsten Peter, David Vilar, Tobias Domhan, Dan Malkin, Markus Freitag
arXiv.org Artificial Intelligence
In addition to broad language coverage, LLMs have also shown impressive capabilities in different domains, like coding, science and math. In this short paper, taking math as an example domain, we study the performance of different LLMs across languages. Experimental results show that there exists a non-negligible and consistent gap in the performance of the models across languages. Interestingly, and somewhat against expectations, the gap exists for both high- and low-resource languages. We hope that these results influence further research into cross-lingual capability generalization for next-generation LLMs. If it weren't for the fact that they are false! By analyzing one of the standard multilingual math benchmarks (MGSM), we determine that several translation errors are present in the data. Furthermore, the lack of standardized answer extraction from LLM outputs also influences the final results. We propose a method for automatic quality assurance to address the first issue at scale, and give recommendations to address the second one. Combining these two approaches, we show that the aforementioned language gap mostly disappears, leading to completely different conclusions from our research.

In recent years, large language models' capabilities have expanded in two primary directions: broader language coverage and enhanced performance on complex tasks. On the language dimension, it is now usual for LLMs to support not only high-resource languages (e.g. …) but also lower-resource ones. This is an important and welcome direction of progress, improving the inclusivity of AI applications and research.
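The abstract does not spell out how the proposed automatic quality assurance works. One plausible way to flag translation errors in a benchmark at scale is reference-free machine-translation quality estimation; the sketch below uses the Unbabel COMET library for this. The checkpoint name, sample item, and threshold are illustrative assumptions, not the authors' method.

```python
# A minimal sketch of automatic QA for a translated benchmark, assuming the
# Unbabel COMET library (pip install unbabel-comet). The checkpoint and the
# threshold are illustrative assumptions, not the method from the paper.
from comet import download_model, load_from_checkpoint

# Reference-free quality estimation: scores (source, translation) pairs
# without needing a human reference translation.
ckpt = download_model("Unbabel/wmt22-cometkiwi-da")
qe_model = load_from_checkpoint(ckpt)

# Each item pairs an original English question with its translation
# (hypothetical example in the style of an MGSM item).
items = [
    {"src": "Janet's ducks lay 16 eggs per day.",
     "mt": "Die Enten von Janet legen 16 Eier pro Tag."},
]

scores = qe_model.predict(items, batch_size=8, gpus=0).scores

# Flag low-scoring items for human review instead of trusting them blindly.
THRESHOLD = 0.6  # hypothetical cut-off; would need tuning per language
for item, score in zip(items, scores):
    if score < THRESHOLD:
        print(f"review: {item['src']!r} -> {item['mt']!r} (QE={score:.2f})")
```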
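On the second issue, even a small difference in answer-extraction conventions can move a benchmark score. The hypothetical `extract_last_number` below is a deliberately naive rule, not the paper's recommendation; it shows how locale formatting such as decimal commas makes two surface forms of the same answer extract differently.

```python
import re
from typing import Optional

def extract_last_number(text: str) -> Optional[str]:
    """Naively extract the last number mentioned in a model's output."""
    # Match digit runs that may contain spaces, commas, or periods inside.
    matches = re.findall(r"-?\d[\d,.\s]*\d|-?\d", text)
    if not matches:
        return None
    raw = matches[-1]
    # Treat spaces and commas as thousands separators -- a naive choice:
    # it silently misreads a decimal comma ("3,5" in German) as "35".
    return raw.replace(" ", "").replace(",", "")

# Same value, two locale conventions, two different extracted answers:
print(extract_last_number("Die Antwort ist 3,5"))  # -> "35"  (wrong)
print(extract_last_number("The answer is 3.5"))    # -> "3.5" (right)
```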
Nov-10-2025