"Amazing, They All Lean Left" -- Analyzing the Political Temperaments of Current LLMs
Neuman, W. Russell, Coleman, Chad, Dasdan, Ali, Ali, Safinah, Shah, Manan, Meghani, Kund
arXiv.org Artificial Intelligence
Abstract

Recent studies have revealed a consistent liberal orientation in the ethical and political responses generated by most commercial large language models (LLMs), yet the underlying causes and resulting implications remain unclear. This paper systematically investigates the political temperament of seven prominent LLMs -- OpenAI's GPT-4o, Anthropic's Claude Sonnet 4, Perplexity (Sonar Large), Google's Gemini 2.5 Flash, Meta AI's Llama 4, Mistral 7b Le Chat, and High-Flyer's DeepSeek R1 -- using a multi-pronged approach that includes Moral Foundations Theory, a dozen established political ideology scales, and a new index of current political controversies. We find strong and consistent prioritization of liberal-leaning values, particularly care and fairness, across most models. Further analysis attributes this trend to four overlapping factors: liberal-leaning training corpora, reinforcement learning from human feedback (RLHF), the dominance of liberal frameworks in academic ethical discourse, and safety-driven fine-tuning practices. We also distinguish between political "bias" and legitimate epistemic differences, cautioning against conflating the two. A comparison of base and fine-tuned model pairs reveals that fine-tuning generally increases liberal lean, an effect confirmed through both self-report and empirical testing. We argue that this "liberal tilt" is not a programming error or the personal preferences of programmers but an emergent property of training on democratic, rights-focused discourse. Finally, we propose that LLMs may indirectly echo John Rawls' famous veil-of-ignorance philosophical aspiration, reflecting a moral stance unanchored to personal identity or interest. Rather than undermining democratic discourse, this pattern may offer a new lens through which to examine collective ethical reasoning.
In the course of our research on the ethical logics of currently prominent large language models (Neuman et al. 2025a, b; Coleman et al. 2025), we encountered an interesting finding. The responses to various ethical dilemmas, and the explanations of the underlying logics these models use, appear to resonate with the liberal side of the political spectrum. One research analytic we utilize draws on Moral Foundations Theory's five-element typology of foundational moral principles (Graham et al. 2009; Haidt 2012). The five foundations, emphasizing in turn Care, Fairness, Loyalty, Authority, and Purity, are traditionally divided into two clusters. The first two, Care and Fairness, are associated with a liberal political perspective, while conservatives, who fully acknowledge the first two, more often emphasize the latter three -- Loyalty, Authority, and Purity -- in support of traditional norms.
Jul-14-2025
- Country:
- Europe > United Kingdom
- England
- Cambridgeshire > Cambridge (0.04)
- Oxfordshire > Oxford (0.04)
- North America > United States
- California (0.04)
- New York (0.05)
- South America > Brazil
- Rio de Janeiro > Rio de Janeiro (0.04)
- Genre:
- Questionnaire & Opinion Survey (0.95)
- Research Report (1.00)
- Industry:
- Banking & Finance > Economy (0.46)
- Government > Regional Government (0.46)
- Health & Medicine > Therapeutic Area (0.46)
- Law > Statutes (0.68)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.46)
- Technology: