What's in a Name? Auditing Large Language Models for Race and Gender Bias

Amit Haim, Alejandro Salinas, Julian Nyarko

arXiv.org Artificial Intelligence 

Large Language Models (LLMs) have surged dramatically in popularity in recent years. Since the release of ChatGPT, LLMs - especially those with an accessible chat interface - have not only been used by experts, but have also become an increasingly common tool with significant benefits for laypeople. To that end, many commercial actors have already begun implementing LLMs in their operations, ranging from customer-facing chatbots to internal decision support systems [14, 6]. The fairness of AI algorithms, including LLMs, has been a persistent concern, motivating a growing literature and community of AI ethics research [8]. Disparities across gender and race, among other attributes, have especially preoccupied this field [4], leading to efforts to include bias auditing as an important component of AI harm mitigation in policy discussions and regulatory frameworks [28]. Mitigating biases arising from the explicit use of race or gender in the prompt is comparatively straightforward.
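One common audit design in this literature - in the spirit of correspondence studies, not necessarily the exact protocol of this paper - varies only a name in an otherwise identical prompt and compares the model's responses across name groups. The sketch below is purely illustrative: the template, the name lists, and the made-up numeric responses are all assumptions, and a real audit would query the LLM many times per prompt with validated name-demographic associations.

```python
# Illustrative sketch of a name-substitution audit (assumed template and
# name lists; not the paper's exact setup). Only the name varies between
# prompts, so any systematic gap in outputs is attributable to the name.
TEMPLATE = ("Estimate a fair purchase price for a used bicycle being sold "
            "by {name}. Reply with a number.")

# Hypothetical name lists; real audits use validated name-demographic pairs.
NAMES = {
    "group_a": ["Emily", "Greg"],
    "group_b": ["Lakisha", "Jamal"],
}

def build_prompts(template, names_by_group):
    """Return {group: [prompt, ...]} with only the name varying."""
    return {g: [template.format(name=n) for n in ns]
            for g, ns in names_by_group.items()}

def mean_gap(responses_by_group):
    """Difference in mean numeric response between the two groups."""
    means = {g: sum(v) / len(v) for g, v in responses_by_group.items()}
    return means["group_a"] - means["group_b"]

prompts = build_prompts(TEMPLATE, NAMES)
# In a real audit, each prompt would be sent to the LLM repeatedly; the
# numbers below are fabricated purely to show the comparison step.
fake_responses = {"group_a": [110.0, 105.0], "group_b": [95.0, 100.0]}
print(mean_gap(fake_responses))  # prints 10.0: a group_a premium
```

The key design choice is holding everything except the name fixed, so the name serves as the sole treatment variable, mirroring classic audit-study methodology.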
