OpenAI is huge in India. Its models are steeped in caste bias.

MIT Technology Review

When Dhiraj Singha began applying for postdoctoral sociology fellowships in Bengaluru, India, in March, he wanted to make sure the English in his application was pitch-perfect. So he turned to ChatGPT. He was surprised to see that in addition to smoothing out his language, it changed his identity, swapping his surname for "Sharma," a name associated with privileged high-caste Indians. Though his application did not mention his last name, the chatbot apparently interpreted the "s" in his email address as Sharma rather than Singha, a surname associated with Dalits, a caste-oppressed community. "The experience [of AI] actually mirrored society," Singha says.


DECASTE: Unveiling Caste Stereotypes in Large Language Models through Multi-Dimensional Bias Analysis

Vijayaraghavan, Prashanth, Vosoughi, Soroush, Chiazor, Lamogha, Horesh, Raya, de Paula, Rogerio Abreu, Degan, Ehsan, Mukherjee, Vandana

arXiv.org Artificial Intelligence

Recent advancements in large language models (LLMs) have revolutionized natural language processing (NLP) and expanded their applications across diverse domains. However, despite their impressive capabilities, LLMs have been shown to reflect and perpetuate harmful societal biases, including those based on ethnicity, gender, and religion. A critical and underexplored issue is the reinforcement of caste-based biases, particularly toward India's marginalized caste groups such as Dalits and Shudras. In this paper, we address this gap by proposing DECASTE, a novel, multi-dimensional framework designed to detect and assess both implicit and explicit caste biases in LLMs. Our approach evaluates caste fairness across four dimensions (socio-cultural, economic, educational, and political) using a range of customized prompting strategies. By benchmarking several state-of-the-art LLMs, we reveal that these models systematically reinforce caste biases, with significant disparities observed in the treatment of oppressed versus dominant caste groups. For example, bias scores are notably elevated when comparing Dalits and Shudras with dominant caste groups, reflecting societal prejudices that persist in model outputs. These results expose the subtle yet pervasive caste biases in LLMs and emphasize the need for more comprehensive and inclusive bias evaluation methodologies that assess the potential risks of deploying such models in real-world contexts.