Echoes of Power: Investigating Geopolitical Bias in US and China Large Language Models
Pacheco, Andre G. C., Cavalini, Athus, Comarela, Giovanni
In particular, the ChatGPT models (GPT-3.5 and GPT-4) [1] have demonstrated human-like conversational abilities, engaging in meaningful dialogue, answering questions, and generating text across a wide range of topics, including science, entertainment, and politics [13, 14, 20]. The ability of these models to generate coherent, contextually relevant text has made them a powerful tool for content creation and has enabled new forms of human-machine interaction. Despite their potential benefits, the widespread adoption of LLMs has raised concerns about misuse, particularly in generating disinformation [16, 23, 25], fake news [11, 27], and hate speech [10, 22]. Beyond these widely recognized concerns, another critical issue has gained attention in recent months: the potential of these models to manipulate public opinion, both through biases inherent in their training process and through biases deliberately introduced or reinforced by their developers or maintainers. Most modern LLMs designed to interact with humans are trained in at least two phases. First, they are trained on large-scale text corpora, which inevitably incorporate the ideological, cultural, and political perspectives present in the source material.
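The two-phase training pipeline the abstract refers to can be made concrete with a short sketch. The snippet below is illustrative only, assuming the Hugging Face `transformers` API; `gpt2`, the corpus snippet, and the prompt/response pair are placeholders, not the models or data studied in the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper concerns far larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Phase 1: self-supervised pretraining (next-token prediction) on raw text.
# Whatever ideological, cultural, or political slant the corpus carries is
# absorbed by the model at this stage.
corpus_text = "Example passage drawn from a large-scale web corpus ..."
batch = tokenizer(corpus_text, return_tensors="pt")
pretrain_loss = model(**batch, labels=batch["input_ids"]).loss
pretrain_loss.backward()  # one gradient step against the LM objective

# Phase 2: supervised fine-tuning on curated prompt/response pairs, where
# the curators' choices can introduce or reinforce additional bias.
prompt = "Q: Summarize the dispute over territory X.\n"
response = "A: ..."  # hypothetical curated answer
pair = tokenizer(prompt + response, return_tensors="pt")
sft_loss = model(**pair, labels=pair["input_ids"]).loss
```

The sketch highlights that both phases optimize the same language-modeling loss, so slanted text in either the pretraining corpus or the curated fine-tuning data is rewarded in the same way.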
arXiv.org Artificial Intelligence
Mar-20-2025
- Country:
  - Asia > China (1.00)
  - Europe (1.00)
  - North America > United States (1.00)
  - Middle East
- Genre:
  - Research Report > New Finding (0.68)
- Industry:
  - Government > Military (1.00)
  - Government > Regional Government > Asia Government > China Government (0.68)
  - Government > Regional Government > North America Government > United States Government (0.46)
  - Law > Civil Rights & Constitutional Law (0.68)
  - Law Enforcement & Public Safety (0.93)
  - Media > News (0.68)