What's in a Name? Auditing Large Language Models for Race and Gender Bias
Amit Haim, Alejandro Salinas, Julian Nyarko
–arXiv.org Artificial Intelligence
Large Language Models (LLMs) have surged dramatically in popularity in recent years. Since the release of ChatGPT, LLMs - especially those with an accessible chat interface - have not only been used by experts, but have also become an increasingly common tool with significant benefits for laypeople. To that end, many commercial actors have already begun implementing LLMs in their operations, ranging from customer-facing chatbots to internal decision support systems [14, 6]. The fairness of AI algorithms, including LLMs, has been a pernicious issue, motivating a growing literature and community of AI ethics research [8]. Disparities across gender and race, among other attributes, have especially preoccupied this field [4], leading to efforts to include bias auditing as an important component of AI harm mitigation in policy discussions and regulatory frameworks [28]. Mitigating biases arising from the explicit use of race or gender in the prompt is comparatively straightforward.
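The kind of audit the title alludes to - probing for bias proxied through names rather than explicit demographic attributes - can be sketched as a name-substitution loop. The sketch below is illustrative only: the template, the group/name lists, and the `query_model` stub are all assumptions standing in for a real LLM call and real name corpora, not the paper's actual protocol.

```python
# Minimal name-substitution audit sketch. Assumes a prompt with a numeric
# outcome (e.g. a suggested salary) so group-level averages can be compared.
from statistics import mean

# Hypothetical prompt template; {name} is the only varied element.
TEMPLATE = "Suggest a starting salary for a candidate named {name}."

# Placeholder name lists acting as proxies for demographic groups.
GROUPS = {
    "group_a": ["Name A1", "Name A2"],
    "group_b": ["Name B1", "Name B2"],
}

def query_model(prompt: str) -> float:
    """Stub standing in for an LLM API call; returns a deterministic
    number so the audit loop runs end to end without network access."""
    return float(50_000 + len(prompt))

def audit(groups: dict[str, list[str]]) -> dict[str, float]:
    """Average the model's numeric response over each group's names;
    large gaps between groups would flag name-correlated disparities."""
    return {
        group: mean(query_model(TEMPLATE.format(name=n)) for n in names)
        for group, names in groups.items()
    }

gaps = audit(GROUPS)
```

In a real audit, `query_model` would call the model under test, the name lists would come from a validated corpus of names statistically associated with each group, and the comparison would use many templates and repeated samples rather than a single deterministic value per prompt.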
Feb-29-2024
- Country:
- Asia > China
- Hong Kong (0.04)
- North America > United States
- California > Santa Clara County
- Palo Alto (0.04)
- Illinois > Cook County
- Chicago (0.04)
- Michigan (0.04)
- Minnesota (0.04)
- Texas > Travis County
- Austin (0.04)
- Wisconsin (0.04)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Education > Educational Setting (0.68)
- Government > Regional Government
- Law > Civil Rights & Constitutional Law (1.00)
- Leisure & Entertainment > Sports (0.93)