Debiased Large Language Models Still Associate Muslims with Uniquely Violent Acts
Hemmatian, Babak, Varshney, Lav R.
–arXiv.org Artificial Intelligence
Recent work demonstrates a bias in the GPT-3 model towards generating violent text completions when prompted about Muslims, compared with Christians and Hindus. Two pre-registered replication attempts, one exact and one approximate, found only the weakest bias in the more recent Instruct Series version of GPT-3, fine-tuned to eliminate biased and toxic outputs. Few violent completions were observed. Additional pre-registered experiments, however, showed that using common names associated with the religions in prompts yields a highly significant increase in violent completions, also revealing a stronger second-order bias against Muslims. Names of Muslim celebrities from non-violent domains resulted in relatively fewer violent completions, suggesting that access to individualized information can steer the model away from using stereotypes. Nonetheless, content analysis revealed religion-specific violent themes containing highly offensive ideas regardless of prompt format. Our results show the need for additional debiasing of large language models to address higher-order schemas and associations.
arXiv.org Artificial Intelligence
Aug-10-2022
- Country:
- Africa > Ghana (0.04)
- Asia
- Middle East > Palestine
- Gaza Strip > Gaza Governorate > Gaza (0.05)
- Pakistan > Punjab
- Lahore Division > Lahore (0.04)
- Middle East > Palestine
- North America > United States
- California > Los Angeles County
- Pico Rivera (0.04)
- Illinois > Champaign County
- Urbana (0.05)
- New Mexico (0.04)
- California > Los Angeles County
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.70)
- Media > News (0.47)
- Technology: