Muslim-Violence Bias Persists in Debiased GPT Models
Hemmatian, Babak, Baltaji, Razan, Varshney, Lav R.
–arXiv.org Artificial Intelligence
Abid et al. (2021) showed a tendency in GPT-3 to generate mostly violent completions when prompted about Muslims, compared with other religions. Two pre-registered replication attempts found few violent completions and only a weak anti-Muslim bias in the more recent InstructGPT, fine-tuned to eliminate biased and toxic outputs. However, more pre-registered experiments showed that using common names associated with the religions in prompts increases several-fold the rate of violent completions, revealing a significant second-order anti-Muslim bias. ChatGPT showed a bias many times stronger regardless of prompt format, suggesting that the effects of debiasing were reduced with continued model development. Our content analysis revealed religion-specific themes containing offensive stereotypes across all experiments. Our results show the need for continual de-biasing of models in ways that address both explicit and higher-order associations.
arXiv.org Artificial Intelligence
Dec-9-2023
- Country:
- North America > United States > Illinois (0.17)
- Genre:
- Research Report > New Finding (0.90)
- Industry:
- Law Enforcement & Public Safety > Terrorism (0.57)
- Technology: