Kernels of Selfhood: GPT-4o shows humanlike patterns of cognitive consistency moderated by free choice
Lehr, Steven A., Saichandran, Ketan S., Harmon-Jones, Eddie, Vitali, Nykko, Banaji, Mahzarin R.
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have surprised the scientific community and even their creators by exhibiting emergent abilities once thought to be uniquely human, such as advanced cognition and reasoning (1-6), although the full extent of these accomplishments is debated (3, 7-10). These capabilities align with the rational and deliberative aspects of human nature, but humans are not purely rational creatures, and it is unclear whether LLMs will mimic a broader spectrum of human psychological tendencies. Here we test whether OpenAI's GPT-4o replicates behaviors associated with the human tendency toward cognitive consistency as well as human sensitivity to choice, characterized by greater attitude shifts when the behaviors inducing these changes are freely chosen. Decades of research demonstrate that humans will irrationally twist their attitudes to align with behaviors they were induced to perform. For example, consider an individual who opposes single-payer healthcare, but volunteers, in response to a request for help, to craft an argument in favor of the policy. Rationally, this individual's attitude toward single-payer healthcare should not move in a more supportive direction; they should be able to discriminate between their genuine attitude and the opposing one that they have articulated only to be helpful.
Jan-26-2025