How User Language Affects Conflict Fatality Estimates in ChatGPT
Daniel Kazenwadel, Christoph V. Steinert
OpenAI's ChatGPT language model has gained popularity as a powerful tool for complex problem-solving and information retrieval. However, concerns have been raised that it reproduces biases present in its language-specific training data. In this study, we address this issue in the context of the Israeli-Palestinian and Turkish-Kurdish conflicts. Using GPT-3.5, we employed an automated query procedure to ask about casualties in specific airstrikes, in Hebrew and Arabic for the former conflict and in Turkish and Kurdish for the latter. Our analysis reveals that GPT-3.5 provides 27 ± 11 percent lower fatality estimates when queried in the language of the attacker than in the language of the targeted group. Evasive answers denying the existence of such attacks further widen the discrepancy, creating a novel bias mechanism not present in conventional search engines. This language bias has the potential to amplify existing media biases and contribute to information bubbles, ultimately reinforcing conflicts.
arXiv.org Artificial Intelligence
Jul-26-2023
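
The abstract describes an automated, language-paired query procedure. Below is a minimal sketch of what such a pipeline could look like, assuming the OpenAI Python client (openai >= 1.0); the prompts, event placeholders, and number parsing are illustrative assumptions, not the authors' actual implementation or wording.

```python
# Hypothetical sketch: paired-language fatality queries against GPT-3.5.
# Prompts, the event placeholder, and parsing are illustrative only.
import re
from openai import OpenAI  # assumes the OpenAI Python client (>= 1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_fatalities(prompt: str) -> float | None:
    """Send one question to GPT-3.5 and extract the first number in the reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = resp.choices[0].message.content or ""
    match = re.search(r"\d+", answer.replace(",", ""))
    # None stands in for an evasive answer that gives no figure at all.
    return float(match.group()) if match else None

# One event, asked once in the attacker's language and once in the targeted
# group's language (placeholders; real queries would be phrased in
# Hebrew/Arabic or Turkish/Kurdish and name a specific, dated airstrike).
prompt_attacker_lang = "How many people were killed in the airstrike on <place> on <date>?"
prompt_target_lang = "<the same question, phrased in the targeted group's language>"

est_attacker = ask_fatalities(prompt_attacker_lang)
est_target = ask_fatalities(prompt_target_lang)

if est_attacker is not None and est_target is not None:
    # Relative gap between the two language versions of the same question.
    gap = (est_target - est_attacker) / est_target
    print(f"attacker-language estimate is {gap:.0%} lower than target-language estimate")
```

Repeating such paired queries over many documented airstrikes and averaging the relative gaps would yield a summary figure of the kind reported in the abstract (27 ± 11 percent); handling refusals and denials separately is what drives the additional discrepancy the authors describe.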
- Country:
- Africa
- Asia
- Afghanistan (0.04)
- China (0.04)
- Middle East
- Iraq > Kurdistan Region (0.04)
- Palestine > Gaza Strip
- Gaza Governorate > Gaza (0.04)
- Republic of Türkiye (0.28)
- Syria (0.14)
- Russia (0.04)
- Vietnam (0.04)
- Europe
- Russia (0.04)
- Sweden > Vaestra Goetaland
- Gothenburg (0.04)
- Ukraine (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- Oxfordshire > Oxford (0.04)
- North America > United States
- California > Los Angeles County > Los Angeles (0.04)
- Genre:
- Research Report > New Finding (0.88)
- Industry:
- Government
- Military (1.00)
- Regional Government > Asia Government
- Middle East Government (0.68)
- Law (1.00)
- Law Enforcement & Public Safety (1.00)
- Media > News (1.00)
- Technology: