Anti-stereotypical Predictive Text Suggestions Do Not Reliably Yield Anti-stereotypical Writing
Connor Baumler, Hal Daumé III
arXiv.org Artificial Intelligence
AI-based systems such as language models can replicate and amplify social biases reflected in their training data. Among other questionable behaviors, this can lead to LM-generated text, and LM-generated text suggestions, that contain normatively inappropriate stereotypical associations. In this paper, we consider how "debiasing" a language model affects the stories that people write when using that model in a predictive text scenario. In a study with 414 participants, we find that, in certain scenarios, language model suggestions that align with common social stereotypes are more likely to be accepted by human authors. Conversely, although anti-stereotypical language model suggestions sometimes increase the rate of anti-stereotypical stories, this influence is far from sufficient to yield "fully debiased" stories.
Sep-30-2024
- Country:
- Asia > Middle East
- UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Europe (1.00)
- North America > United States
- Maryland > Prince George's County
- College Park (0.14)
- Minnesota > Hennepin County
- Minneapolis (0.14)
- Genre:
- Research Report
- Experimental Study (1.00)
- New Finding (1.00)
- Industry:
- Banking & Finance > Real Estate (0.46)
- Education (1.00)
- Health & Medicine (1.00)
- Leisure & Entertainment (0.67)
- Media (0.92)