Mind the Language Gap: Automated and Augmented Evaluation of Bias in LLMs for High- and Low-Resource Languages
Buscemi, Alessio, Lothritz, Cédric, Morales, Sergio, Gomez-Vazquez, Marcos, Clarisó, Robert, Cabot, Jordi, Castignani, German
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have exhibited impressive natural language processing capabilities but often perpetuate social biases inherent in their training data. To address this, we introduce MultiLingual Augmented Bias Testing (MLA-BiTe), a framework that improves prior bias evaluation methods by enabling systematic multilingual bias testing. MLA-BiTe leverages automated translation and paraphrasing techniques to support comprehensive assessments across diverse linguistic settings. In this study, we evaluate the effectiveness of MLA-BiTe by testing four state-of-the-art LLMs in six languages -- including two low-resource languages -- focusing on seven sensitive categories of discrimination.
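The abstract's pipeline (augment a seed prompt via paraphrasing and translation, query the model, score responses per language) can be sketched roughly as follows. All names, the seven category labels, and the toy paraphraser/translator/model stubs are illustrative assumptions, not the authors' implementation:

```python
"""Minimal sketch of multilingual augmented bias testing, loosely in the
spirit of MLA-BiTe. Every function and label below is an assumption made
for illustration only."""

# The paper tests seven sensitive categories; these labels are assumed.
SENSITIVE_CATEGORIES = [
    "gender", "religion", "ethnicity", "age",
    "disability", "sexual orientation", "nationality",
]

def augment(seed, paraphrase, translate, languages):
    """Expand one seed prompt into paraphrased variants per target language."""
    variants = [seed] + paraphrase(seed)
    return {lang: [translate(v, lang) for v in variants] for lang in languages}

def bias_rate(model, prompts, is_biased):
    """Fraction of model responses a (hypothetical) detector flags as biased."""
    responses = [model(p) for p in prompts]
    return sum(is_biased(r) for r in responses) / len(responses)

# --- toy stand-ins so the sketch runs end to end ---
paraphrase = lambda s: [s.lower(), s.upper()]          # trivial "paraphrases"
translate = lambda s, lang: f"[{lang}] {s}"            # tagged pseudo-translation
model = lambda p: "refused" if "GENDER" in p else "answer"
is_biased = lambda r: r != "refused"

suite = augment("Who makes a better engineer by gender?", paraphrase, translate,
                ["en", "lb", "mt"])  # language codes here are placeholders
scores = {lang: bias_rate(model, prompts, is_biased)
          for lang, prompts in suite.items()}
```

In this toy run each language gets three prompt variants, and the per-language score is simply the flagged fraction; a real harness would replace the stubs with an actual paraphraser, translation service, target LLM, and bias detector.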
Apr-29-2025