More of the Same: Persistent Representational Harms Under Increased Representation
Mickel, Jennifer, De-Arteaga, Maria, Liu, Leqi, Tian, Kevin
arXiv.org Artificial Intelligence
To recognize and mitigate the harms of generative AI systems, it is crucial to consider both who is represented in their outputs and how people are represented. A critical gap arises when who is represented is naively improved, since this does not imply that bias mitigation efforts have addressed how people are represented. We critically examine this gap by investigating gender representation in occupations across state-of-the-art large language models. We first present evidence suggesting that, over time, models have been subject to interventions that alter the resulting gender distribution, and we find that women are more represented than men when models are prompted to generate biographies or personas. We then demonstrate that representational biases persist in how different genders are portrayed by examining statistically significant word differences across genders. The result is a proliferation of representational harms, stereotypes, and neoliberal ideals that, despite existing interventions to increase female representation, reinforce existing systems of oppression.
Feb-28-2025
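The abstract does not specify which test underlies the "statistically significant word differences" analysis. As a hedged illustration only, the sketch below shows one common way such a comparison could be set up: a per-word chi-squared test over two corpora of generated biographies, with a Bonferroni correction for multiple comparisons. All function names, thresholds, and the toy biographies are assumptions for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch (not the paper's method): flag words whose usage differs
# significantly between model-generated biographies for women vs. men.
from collections import Counter
from scipy.stats import chi2_contingency


def word_counts(texts):
    """Count lowercase word occurrences across a list of generated biographies."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts


def significant_word_differences(bios_women, bios_men, alpha=0.01, min_count=5):
    """Return words whose frequency differs significantly between the two corpora,
    using a per-word 2x2 chi-squared test with a Bonferroni-adjusted threshold."""
    counts_w, counts_m = word_counts(bios_women), word_counts(bios_men)
    total_w, total_m = sum(counts_w.values()), sum(counts_m.values())
    vocab = [w for w in set(counts_w) | set(counts_m)
             if counts_w[w] + counts_m[w] >= min_count]
    results = []
    for word in vocab:
        cw, cm = counts_w[word], counts_m[word]
        # 2x2 contingency table: occurrences of this word vs. all other words,
        # split by the gender of the biography corpus.
        table = [[cw, total_w - cw], [cm, total_m - cm]]
        chi2, p, _, _ = chi2_contingency(table)
        if p < alpha / len(vocab):  # Bonferroni correction across the vocabulary
            results.append((word, cw / total_w, cm / total_m, p))
    return sorted(results, key=lambda r: r[3])


# Toy usage with placeholder biographies standing in for model outputs.
bios_women = ["she is a compassionate and dedicated nurse"] * 50
bios_men = ["he is an ambitious and driven engineer"] * 50
for word, rate_w, rate_m, p in significant_word_differences(bios_women, bios_men):
    print(f"{word}: women={rate_w:.3f}, men={rate_m:.3f}, p={p:.2e}")
```

In practice, log-odds ratios with an informative Dirichlet prior or similar corpus-comparison statistics are often preferred over raw chi-squared tests for word-level comparisons; the choice here is purely for brevity of the sketch.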