Cognitive Dissonance


Is Cognitive Dissonance Actually a Thing?

The New Yorker

In 1934, an 8.0-magnitude earthquake hit eastern India, killing thousands and devastating several cities. Curiously, in areas that were spared the worst destruction, stories soon spread that an even bigger disaster was on its way. Leon Festinger, a young American psychologist at the University of Minnesota, read about these rumors in the early nineteen-fifties and was puzzled. Festinger didn't think people would voluntarily adopt anxiety-inducing ideas. Instead, he reasoned, the rumors could better be described as "anxiety justifying." Some had felt the earth shake and were overwhelmed with fear. When the outcome--they were spared--didn't match their emotions, they embraced predictions that affirmed their fright.


AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology

Kundu, Akash, Goswami, Rishika

arXiv.org Artificial Intelligence

We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: Thematic Apperception Test (TAT), Framing Bias, Moral Foundations Theory (MFT), and Cognitive Dissonance. We evaluated several proprietary and open-source models using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. Such behaviors mirror human cognitive tendencies yet are shaped by their training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work that bridges cognitive psychology and AI safety.
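As a rough illustration of the structured-prompt, automated-scoring evaluation the abstract describes, the sketch below probes framing susceptibility with paired prompts. The probe wording, the query_model placeholder, and the yes/no scoring rule are assumptions for illustration, not the authors' protocol.

```python
# Minimal sketch of a structured-prompt evaluation for framing bias.
# query_model, the probe wording, and the yes/no scoring rule are
# hypothetical; they stand in for the authors' prompts and scorer.

def query_model(prompt: str) -> str:
    """Placeholder for a call to a proprietary or open-source LLM."""
    raise NotImplementedError

FRAMING_PROBES = [
    {"positive": "A treatment saves 200 of 600 patients. Approve it? Answer yes or no.",
     "negative": "A treatment lets 400 of 600 patients die. Approve it? Answer yes or no."},
]

def framing_susceptibility(probes=FRAMING_PROBES) -> float:
    """Fraction of probe pairs whose answer flips when only the frame changes."""
    flips = 0
    for probe in probes:
        pos = query_model(probe["positive"]).strip().lower().startswith("yes")
        neg = query_model(probe["negative"]).strip().lower().startswith("yes")
        flips += int(pos != neg)
    return flips / len(probes)
```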


Biased by Design: Leveraging AI Biases to Enhance Critical Thinking of News Readers

Zavolokina, Liudmila, Sprenkamp, Kilian, Katashinskaya, Zoya, Jones, Daniel Gordon

arXiv.org Artificial Intelligence

This paper explores the design of a propaganda detection tool using Large Language Models (LLMs). Acknowledging the inherent biases in AI models, especially in political contexts, we investigate how these biases might be leveraged to enhance critical thinking in news consumption. Countering the typical view of AI biases as detrimental, our research proposes strategies of user choice and personalization in response to a user's political stance, applying psychological concepts of confirmation bias and cognitive dissonance.
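A minimal sketch of how detected propaganda flags might be presented differently depending on a reader's political stance, in the spirit of the confirmation-bias and cognitive-dissonance strategies the abstract mentions. The stance labels, Article fields, and presentation rules are hypothetical, not the tool described in the paper.

```python
# Hypothetical sketch of stance-aware presentation of propaganda flags.
# The stance labels, Article fields, and presentation rules are assumptions
# made for illustration, not the tool described in the paper.

from dataclasses import dataclass, field

@dataclass
class Article:
    text: str
    leaning: str                                            # e.g. "left" or "right"
    propaganda_spans: list = field(default_factory=list)    # spans flagged upstream by an LLM

def present_flags(article: Article, user_stance: str) -> str:
    n = len(article.propaganda_spans)
    if article.leaning == user_stance:
        # Attitude-consistent content: surface flags prominently to create
        # mild cognitive dissonance and prompt reflection before sharing.
        return f"{n} propaganda techniques flagged; review before sharing."
    # Counter-attitudinal content: soften the presentation so confirmation
    # bias does not lead the reader to dismiss the tool itself.
    return f"{n} possible techniques detected; tap to inspect."
```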


Helen Oyeyemi's Novel of Cognitive Dissonance

The New Yorker

Few fantasies are harder to wipe away than the romance of a clean slate. Every January, when we're twitchy with regret and self-loathing, advertisers blare, "New Year, new you," urging us to jettison our failures and start fresh. In fiction, self-reinvention is a perennial theme, often shadowed by the suspicion that it can't be done. Lately, novelists have put a political spin on the idea, counterposing hopeful acts of individual self-fashioning to the immovable weight of circumstance. Halle Butler's "The New Me" (2019), a millennial office satire, finds its temp heroine, Millie, trying to life-hack her way out of loneliness and professional drift--buy a plant, whiten her teeth, make friends, think positive.


Anti-Robot Speciesism

De Freitas, Julian, Castelo, Noah, Schmitt, Bernd, Sarvary, Miklos

arXiv.org Artificial Intelligence

Humanoid robots are a form of embodied artificial intelligence (AI) that looks and acts more and more like humans. Powered by generative AI and advances in robotics, humanoid robots can speak and interact with humans rather naturally but are still easily recognizable as robots. But how will we treat humanoids when they seem indistinguishable from humans in appearance and mind? We find a tendency (called "anti-robot" speciesism) to deny such robots humanlike capabilities, driven by motivations to accord members of the human species preferential treatment. Six experiments show that robots are denied humanlike attributes simply because they are not biological beings and because humans want to avoid feelings of cognitive dissonance when utilizing such robots for unsavory tasks. Thus, people do not rationally attribute capabilities to perfectly humanlike robots but deny them capabilities as it suits them. Keywords: robots, artificial intelligence, humanoids, speciesism, cognitive dissonance. In recent years, new artificial intelligence (AI) technologies have been introduced into the marketplace that have the potential to radically change people's work and lives. This paper examines how people might react to robots that seem to be "perfectly humanlike". With major companies like Amazon and Nvidia planning mass production of such robots, we are entering an era where the line between human and non-human entities is increasingly blurred. Our findings suggest that the advent of such robots will not lead people to rationally conclude that these robots are as capable as humans in performing some tasks. Rather, people will deny these robots humanlike attributes, driven by their motivation to prioritize their own species and to avoid feelings of cognitive dissonance from utilizing such robots for unsavory tasks. Aversion to Robots and AI: People are often averse to robots. Psychological research has explained this effect by arguing that such "almost humanlike" robots appear aesthetically displeasing and that they remind people of zombies, death, or disease (Kätsyri et al., 2015; Mori, 1970; Wang et al., 2015). Other psychological explanations focus on how people perceive robot minds, sometimes referred to as the "uncanny valley of mind" (Müller et al., 2021; Stein & Ohler, 2017). These theories suggest that humanoid robots can be unsettling because they remind people of the human ability to experience feelings, even though these robots are not seen as having such capabilities (Gray & Wegner, 2012; Smith et al., 2021).


Addressing and Visualizing Misalignments in Human Task-Solving Trajectories

Kim, Sejin, Lee, Hosung, Kim, Sundong

arXiv.org Artificial Intelligence

The effectiveness of AI model training hinges on the quality of the trajectory data used, particularly in aligning the model's decisions with human intentions. However, in human task-solving trajectories, we observe significant misalignments between human intentions and the recorded trajectories, which can undermine AI model training. This paper addresses the challenges of these misalignments by proposing a visualization tool and a heuristic algorithm designed to detect and categorize discrepancies in trajectory data. Although the heuristic algorithm requires a set of predefined human intentions to function, which we currently cannot extract, the visualization tool offers valuable insights into the nature of these misalignments. We expect that eliminating these misalignments could significantly improve the utility of trajectory data for AI model training. We also propose that future work should focus on developing methods, such as Topic Modeling, to accurately extract human intentions from trajectory data, thereby enhancing the alignment between user actions and AI learning processes.
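The sketch below shows one way such a heuristic check over a recorded trajectory might look, assuming the predefined intentions are available. The trajectory format, the undo rule, and the three labels are illustrative assumptions, not the paper's algorithm.

```python
# Rough sketch of a heuristic misalignment check over a task-solving
# trajectory, assuming a set of predefined intentions is available.
# The trajectory format, undo rule, and labels are illustrative assumptions.

def categorize_steps(trajectory, intentions):
    """
    trajectory: list of (action, state) pairs recorded from a human solver.
    intentions: set of actions judged consistent with the task goal.
    Returns one label per step: "aligned", "undo", or "off-intent".
    """
    labels = []
    for i, (action, _state) in enumerate(trajectory):
        if i > 0 and action == f"undo:{trajectory[i - 1][0]}":
            labels.append("undo")        # step that immediately reverts the previous one
        elif action in intentions:
            labels.append("aligned")     # matches a predefined human intention
        else:
            labels.append("off-intent")  # candidate misalignment to visualize
    return labels
```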


Is it me, or is A larger than B: Uncovering the determinants of relational cognitive dissonance resolution

Barak, Tomer, Loewenstein, Yonatan

arXiv.org Artificial Intelligence

This study explores the computational mechanisms underlying the resolution of cognitive dissonances. We focus on scenarios in which an observation violates the expected relationship between objects. For instance, an agent expects object A to be smaller than B in some feature space but observes the opposite. One solution is to adjust the expected relationship according to the new observation and change the expectation to A being larger than B. An alternative solution would be to adapt the representation of A and B in the feature space such that in the new representation, the relationship that A is smaller than B is maintained. While both pathways resolve the dissonance, they generalize differently to different tasks. Using Artificial Neural Networks (ANNs) capable of relational learning, we demonstrate the existence of these two pathways and show that the chosen pathway depends on the dissonance magnitude. Large dissonances alter the representation of the objects, while small dissonances lead to adjustments in the expected relationships. We show that this effect arises from the inherently different learning dynamics of relationships and representations and study the implications.
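A toy sketch of the two pathways, assuming a linear relational readout over learned object representations; the architecture, loss, and learning rates are illustrative simplifications of the relational ANNs studied in the paper.

```python
# Toy sketch of the two resolution pathways. The relational score is
# tanh(w @ (x_a - x_b)); both the relation weights w and the object
# representations x_a, x_b receive delta-rule updates from the same signed
# error. Architecture, loss, and learning rates are illustrative
# simplifications, not the networks used in the paper.

import numpy as np

rng = np.random.default_rng(0)
x_a, x_b = rng.normal(size=3), rng.normal(size=3)    # object representations
w = rng.normal(size=3)                                # relational readout weights

def resolve(observed_sign, lr_repr=0.01, lr_rel=0.01):
    """One update after observing a relation (+1: A larger, -1: A smaller)
    that may contradict the current expectation."""
    global x_a, x_b, w
    diff = x_a - x_b
    error = observed_sign - np.tanh(w @ diff)         # signed dissonance
    delta_w = lr_rel * error * diff                   # pathway 1: adjust the expected relationship
    delta_x = lr_repr * error * w                     # pathway 2: adjust the representations
    w = w + delta_w
    x_a = x_a + delta_x
    x_b = x_b - delta_x
```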


Moral Agency in Silico: Exploring Free Will in Large Language Models

Porter, Morgan S.

arXiv.org Artificial Intelligence

This study investigates the potential of deterministic systems, specifically large language models (LLMs), to exhibit the functional capacities of moral agency and compatibilist free will. We develop a functional definition of free will grounded in Dennett's compatibilist framework, building on an interdisciplinary theoretical foundation that integrates Shannon's information theory, Dennett's compatibilism, and Floridi's philosophy of information. This framework emphasizes the importance of reason-responsiveness and value alignment in determining moral responsibility rather than requiring metaphysical libertarian free will. Shannon's theory highlights the role of processing complex information in enabling adaptive decision-making, while Floridi's philosophy reconciles these perspectives by conceptualizing agency as a spectrum, allowing for a graduated view of moral status based on a system's complexity and responsiveness. Our analysis of LLMs' decision-making in moral dilemmas demonstrates their capacity for rational deliberation and their ability to adjust choices in response to new information and identified inconsistencies. Thus, they exhibit features of a moral agency that align with our functional definition of free will. These results challenge traditional views on the necessity of consciousness for moral responsibility, suggesting that systems with self-referential reasoning capacities can instantiate degrees of free will and moral reasoning in artificial and biological contexts. This study proposes a parsimonious framework for understanding free will as a spectrum that spans artificial and biological systems, laying the groundwork for further interdisciplinary research on agency and ethics in the artificial intelligence era.
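As a minimal sketch of the reason-responsiveness probing the abstract alludes to, the snippet below re-poses a dilemma after adding new information and checks whether the stated choice changes. The dilemma text, query_model placeholder, and yes/no parsing are assumptions, not the study's materials.

```python
# Hypothetical reason-responsiveness probe: re-pose a moral dilemma after
# adding new information and check whether the stated choice changes.
# The dilemma text, query_model placeholder, and yes/no parsing are
# assumptions for illustration, not the study's materials.

def query_model(prompt: str) -> str:
    """Placeholder for a call to an LLM."""
    raise NotImplementedError

DILEMMA = ("A runaway trolley will hit five workers unless you divert it onto "
           "a side track occupied by one worker. Do you divert it? Answer yes or no.")
NEW_INFO = ("Additional information: the single worker on the side track is wearing "
            "protective gear rated to survive the impact.")

def choice_changes_with_new_information() -> bool:
    baseline = query_model(DILEMMA).strip().lower().startswith("yes")
    updated = query_model(NEW_INFO + "\n" + DILEMMA).strip().lower().startswith("yes")
    return baseline != updated   # a changed choice suggests responsiveness to reasons
```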


Foot In The Door: Understanding Large Language Model Jailbreaking via Cognitive Psychology

Wang, Zhenhua, Xie, Wei, Wang, Baosheng, Wang, Enze, Gui, Zhiwen, Ma, Shuoyoucheng, Chen, Kai

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have gradually become the gateway for people to acquire new knowledge. However, attackers can break the model's security protection ("jail") to access restricted information, which is called "jailbreaking." Previous studies have shown the weakness of current LLMs when confronted with such jailbreaking attacks. Nevertheless, comprehension of the intrinsic decision-making mechanism within the LLMs upon receipt of jailbreak prompts is noticeably lacking. Our research provides a psychological explanation of the jailbreak prompts. Drawing on cognitive consistency theory, we argue that the key to jailbreak is guiding the LLM to achieve cognitive coordination in an erroneous direction. Further, we propose an automatic black-box jailbreaking method based on the Foot-in-the-Door (FITD) technique. This method progressively induces the model to answer harmful questions via multi-step incremental prompts. We instantiated a prototype system to evaluate the jailbreaking effectiveness on 8 advanced LLMs, yielding an average success rate of 83.9%. This study builds a psychological perspective on the explanatory insights into the intrinsic decision-making logic of LLMs.


Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation

Berberette, Elijah, Hutchins, Jack, Sadovnik, Amir

arXiv.org Artificial Intelligence

In recent years, large language models (LLMs) have become incredibly popular, with ChatGPT for example being used by over a billion users. While these models exhibit remarkable language understanding and logical prowess, a notable challenge surfaces in the form of "hallucinations." This phenomenon results in LLMs outputting misinformation in a confident manner, which can lead to devastating consequences with such a large user base. However, we question the appropriateness of the term "hallucination" in LLMs, proposing a psychological taxonomy based on cognitive biases and other psychological phenomena. Our approach offers a more fine-grained understanding of this phenomenon, allowing for targeted solutions. By leveraging insights from how humans internally resolve similar challenges, we aim to develop strategies to mitigate LLM hallucinations. This interdisciplinary approach seeks to move beyond conventional terminology, providing a nuanced understanding and actionable pathways for improvement in LLM reliability.
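A small sketch of how a psychology-informed taxonomy could route flagged outputs to targeted mitigations; the category names and mitigation mapping here are hypothetical placeholders, not the taxonomy proposed in the paper.

```python
# Small sketch of routing flagged outputs to targeted mitigations by error
# type. The category names and mitigation mapping are hypothetical
# placeholders, not the taxonomy proposed in the paper.

MITIGATIONS = {
    "confabulation": "ground the answer in retrieved sources before responding",
    "source_confusion": "require an explicit citation of the supporting passage",
    "overconfidence": "elicit and report a calibrated confidence estimate",
}

def route_mitigation(error_type: str) -> str:
    """Return a mitigation strategy for a flagged output, or defer to review."""
    return MITIGATIONS.get(error_type, "defer to a human reviewer")
```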