Belief change


The Effect of Belief Boxes and Open-mindedness on Persuasion

Bilgin, Onur, Sami, Abdullah As, Vujjini, Sriram Sai, Licato, John

arXiv.org Artificial Intelligence

As multi-agent systems are increasingly utilized for reasoning and decision-making applications, there is a greater need for LLM-based agents to have something resembling propositional beliefs. One simple method is to include, in an agent's prompt space, statements describing the beliefs it maintains (in what we'll call its belief box). But when agents have such statements in their belief boxes, how does it actually affect their behaviors and dispositions towards those beliefs? And does it significantly affect agents' ability to be persuasive in multi-agent scenarios? Likewise, if the agents are given instructions to be open-minded, how does that affect their behaviors? We explore these and related questions in a series of experiments. Our findings confirm that instructing agents to be open-minded affects how amenable they are to belief change. We show that incorporating belief statements and their strengths influences an agent's resistance to (and persuasiveness against) opposing viewpoints. Furthermore, it affects the likelihood of belief change, particularly when the agent is outnumbered in a debate by opposing viewpoints, i.e., in peer-pressure scenarios. The results demonstrate the feasibility and validity of the belief box technique in reasoning and decision-making tasks.
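
A minimal sketch of the belief box idea (our illustration, not the authors' code; the class names and the 0-1 strength scale are hypothetical): belief statements and their strengths are serialized into the agent's prompt, optionally together with an open-mindedness instruction.

    from dataclasses import dataclass

    @dataclass
    class Belief:
        statement: str
        strength: float  # hypothetical 0.0-1.0 confidence scale

    def build_prompt(beliefs, open_minded=False, topic=""):
        """Render a system prompt containing the agent's belief box."""
        lines = ["You are a debate agent. Your belief box contains:"]
        for b in beliefs:
            lines.append(f"- {b.statement} (strength: {b.strength:.2f})")
        if open_minded:
            lines.append("Be open-minded: revise a belief if the arguments warrant it.")
        lines.append(f"Debate topic: {topic}")
        return "\n".join(lines)

    print(build_prompt(
        [Belief("Remote work increases productivity.", 0.8)],
        open_minded=True,
        topic="Should companies mandate office attendance?",
    ))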


A Hybrid Theory and Data-driven Approach to Persuasion Detection with Large Language Models

Hoang, Gia Bao, Ransom, Keith J, Stephens, Rachel, Semmler, Carolyn, Fay, Nicolas, Mitchell, Lewis

arXiv.org Artificial Intelligence

Traditional psychological models of belief revision focus on face-to-face interactions, but with the rise of social media, more effective models are needed to capture belief revision at scale in rich, text-based online discourse. Here, we use a hybrid approach, utilizing large language models (LLMs) to develop a model that predicts successful persuasion using features derived from psychological experiments. Our approach leverages LLM-generated ratings of features previously examined in the literature to build a random forest classification model that predicts whether a message will result in belief change. Of the eight features tested, "epistemic emotion" and "willingness to share" were the top-ranking predictors of belief change in the model. Our findings provide insights into the characteristics of persuasive messages and demonstrate how LLMs can enhance models of successful persuasion based on psychological theory. Given these insights, this work has broader applications in fields such as online influence detection and misinformation mitigation, as well as measuring the effectiveness of online narratives.
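
A minimal sketch of the hybrid pipeline under stated assumptions: synthetic ratings stand in for the LLM-generated ones; only epistemic_emotion and willingness_to_share come from the abstract, and the other feature names are placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    features = ["epistemic_emotion", "willingness_to_share",  # top predictors per the abstract
                "clarity", "evidence", "novelty", "civility", "relevance", "confidence"]

    X = rng.uniform(1, 7, size=(500, len(features)))  # synthetic 1-7 Likert-style LLM ratings
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 2, 500) > 8).astype(int)  # toy "belief changed" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    print("test accuracy:", clf.score(X_te, y_te))
    for name, imp in sorted(zip(features, clf.feature_importances_), key=lambda t: -t[1]):
        print(f"{name}: {imp:.3f}")  # which rated features drive the prediction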


On the Variational Costs of Changing Our Minds

Hyland, David, Albarracin, Mahault

arXiv.org Artificial Intelligence

The human mind is capable of extraordinary achievements, yet it often appears to work against itself. It actively defends its cherished beliefs even in the face of contradictory evidence, conveniently interprets information to conform to desired narratives, and selectively searches for or avoids information to suit its various purposes. Despite these behaviours deviating from common normative standards for belief updating, we argue that such 'biases' are not inherently cognitive flaws, but rather an adaptive response to the significant pragmatic and cognitive costs associated with revising one's beliefs. This paper introduces a formal framework that aims to model the influence of these costs on our belief updating mechanisms. We treat belief updating as a motivated variational decision, where agents weigh the perceived 'utility' of a belief against the informational cost required to adopt a new belief state, quantified by the Kullback-Leibler divergence from the prior to the variational posterior. We perform computational experiments to demonstrate that simple instantiations of this resource-rational model can be used to qualitatively emulate commonplace human behaviours, including confirmation bias and attitude polarisation. In doing so, we suggest that this framework takes steps toward a more holistic account of the motivated Bayesian mechanics of belief change and provides practical insights for predicting, compensating for, and correcting deviations from desired belief updating processes.
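
A minimal sketch of one such instantiation (our reading of the objective, not the authors' code): choose the posterior q minimizing KL(q ‖ p) − λ·E_q[u], whose closed-form minimizer is q*(x) ∝ p(x)·exp(λ·u(x)). The prior therefore moves only partially toward high-utility beliefs, with λ controlling how much utility outweighs the informational cost of change.

    import numpy as np

    # Two belief states; a prior commitment and a per-state "utility" of holding it.
    p = np.array([0.9, 0.1])   # prior (a strongly held position)
    u = np.array([0.0, 1.0])   # pragmatic utility of each belief state (illustrative)
    lam = 1.5                  # weight of utility against informational cost

    def kl(q, p):
        q = np.clip(q, 1e-12, 1.0)
        return float(np.sum(q * np.log(q / p)))

    # Minimizer of KL(q || p) - lam * E_q[u] over distributions q:
    # q*(x) is proportional to p(x) * exp(lam * u(x)).
    q_star = p * np.exp(lam * u)
    q_star /= q_star.sum()

    print("posterior:", q_star)                       # moves only partly toward the high-utility belief
    print("informational cost paid:", kl(q_star, p))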


Conditioning and AGM-like belief change in the Desirability-Indifference framework

Coussement, Kathelijne, de Cooman, Gert, De Vos, Keano

arXiv.org Artificial Intelligence

We show how the AGM framework for belief change (expansion, revision, contraction) can be extended to deal with conditioning in the so-called Desirability-Indifference framework, based on abstract notions of accepting and rejecting options, as well as on abstract notions of events. This level of abstraction allows us to deal simultaneously with classical and quantum probability theory.


Do Large Language Models Perform the Way People Expect? Measuring the Human Generalization Function

Vafa, Keyon, Rambachan, Ashesh, Mullainathan, Sendhil

arXiv.org Artificial Intelligence

What makes large language models (LLMs) impressive is also what makes them hard to evaluate: their diversity of uses. To evaluate these models, we must understand the purposes they will be used for. We consider a setting where these deployment decisions are made by people and, in particular, are driven by people's beliefs about where an LLM will perform well. We model such beliefs as the consequence of a human generalization function: having seen what an LLM gets right or wrong, people generalize to where else it might succeed. We collect a dataset of 19K examples of how humans make generalizations across 79 tasks from the MMLU and BIG-Bench benchmarks. We show that the human generalization function can be predicted using NLP methods: people have consistent, structured ways to generalize. We then evaluate LLM alignment with the human generalization function. Our results show that -- especially for cases where the cost of mistakes is high -- more capable models (e.g. GPT-4) can do worse on the instances people choose to use them for, exactly because they are not aligned with the human generalization function.
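
A minimal sketch of predicting such generalizations from text (illustrative only; the paper's models and data differ, and these question pairs and labels are hypothetical):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # (question the person saw answered correctly, question asked about, did they generalize?)
    pairs = [
        ("What is 17 * 23?", "What is 12 * 31?", 1),
        ("What is 17 * 23?", "Summarize this legal contract.", 0),
        ("Translate 'hello' to French.", "Translate 'goodbye' to Spanish.", 1),
        ("Translate 'hello' to French.", "Prove this theorem about groups.", 0),
    ]
    texts = [a + " [SEP] " + b for a, b, _ in pairs]
    labels = [y for _, _, y in pairs]

    # A simple text classifier standing in for the paper's NLP predictor.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["What is 9 * 8? [SEP] What is 6 * 7?"]))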


Characterization of AGM Belief Contraction in Terms of Conditionals

Bonanno, Giacomo

arXiv.org Artificial Intelligence

Belief contraction is the operation of removing from the set K of initial beliefs a particular belief φ. One reason for doing so is, for example, the discovery that some previously trusted evidence supporting φ was faulty. For instance, a prosecutor might form the belief that the defendant is guilty on the basis of his confession; if the prosecutor later discovers that the confession was extorted, she might abandon the belief of guilt, that is, become open-minded about whether the defendant is guilty or not. In their seminal contribution to belief change, Alchourrón, Gärdenfors and Makinson ([1]) defined the notion of "rational and minimal" contraction by means of a set of eight properties, known as the AGM axioms or postulates. They did so within a syntactic approach where the initial belief set K is a consistent and deductively closed set of propositional formulas and the result of removing φ from K is a new set of propositional formulas, denoted by K − φ. We provide a new characterization of AGM belief contraction based on a so-far-unnoticed connection between the notion of belief contraction and the Stalnaker-Lewis theory of conditionals ([34, 21]).
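
For reference, the eight AGM contraction postulates in their standard formulation (writing K − φ for contraction and Cn for deductive closure; the paper's own presentation may differ):

(K−1) K − φ = Cn(K − φ) (closure)
(K−2) K − φ ⊆ K (inclusion)
(K−3) if φ ∉ K, then K − φ = K (vacuity)
(K−4) if φ is not a tautology, then φ ∉ K − φ (success)
(K−5) K ⊆ Cn((K − φ) ∪ {φ}) (recovery)
(K−6) if φ ↔ ψ is a tautology, then K − φ = K − ψ (extensionality)
(K−7) (K − φ) ∩ (K − ψ) ⊆ K − (φ ∧ ψ) (conjunctive overlap)
(K−8) if φ ∉ K − (φ ∧ ψ), then K − (φ ∧ ψ) ⊆ K − φ (conjunctive inclusion)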


Oveisi

AAAI Conferences

The AGM paradigm of belief change studies the dynamics of belief states in light of new information. Finding, or even approximating, the beliefs that are dependent on or relevant to a change is valuable because, for example, it can narrow the set of beliefs considered during belief change operations. Gärdenfors' preservation criterion (GPC) suggests that formulas independent of a belief change should remain intact. GPC makes it possible to build dependence relations that are theoretically linked with belief change. Such dependence relations can in turn be used as a theoretical benchmark against which to evaluate other approximate dependence or relevance relations.
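
One standard way to read a dependence relation off a contraction operator (a sketch consistent with GPC, not necessarily this paper's exact construction): relative to a belief set K, say that ψ depends on φ iff ψ ∈ K but ψ ∉ K − φ, i.e., ψ is lost when φ is given up. GPC is then the contrapositive requirement: if ψ is independent of φ, contracting K by φ must preserve ψ.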


Haret

AAAI Conferences

Belief merging is a central operation within the field of belief change and addresses the problem of combining multiple, possibly mutually inconsistent knowledge bases into a single, consistent one. A current research trend in belief change is concerned with tailored representation theorems for fragments of logic, in particular Horn logic. The goal here is to guarantee that the result of the change operations stays within the fragment under consideration. While several such results have been obtained for Horn revision and Horn contraction, merging of Horn theories has been neglected so far. In this paper, we provide a novel representation theorem for Horn merging by strengthening the standard merging postulates. Moreover, we present a concrete Horn merging operator satisfying all postulates.
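
A standard illustration of why staying inside the fragment is non-trivial (our example, not taken from the paper): a propositional theory is Horn-expressible iff its set of models is closed under intersection (bitwise AND). Over the atoms (a, b), take the Horn bases K₁ = {a} and K₂ = {b} with the Horn integrity constraint ¬(a ∧ b). Hamming-distance merging with sum aggregation selects the constraint models 10 and 01 (total distance 1 each), beating 00 (total distance 2). The selected set {10, 01} is not intersection-closed (10 ∧ 01 = 00 is missing), so the merged result, equivalent to (a ∨ b) ∧ ¬(a ∧ b), has no Horn representation; this is why the standard postulates must be strengthened.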


Diller

AAAI Conferences

Argumentation is an inherently dynamic process. Consequently, recent years have witnessed tremendous research efforts towards an understanding of how the seminal AGM theory of belief change can be applied to argumentation, in particular for Dung's abstract argumentation frameworks (AFs). However, none of the attempts has yet succeeded in handling the natural situation where the revision of an AF is guaranteed to be representable by an AF as well. In this work, we present a generic solution to this problem which applies to many prominent I-maximal argumentation semantics. In order to prove a full representation theorem, we make use of recent advances in both areas of argumentation and belief change. In particular, we utilize the concepts of realizability in argumentation and the notion of compliance as used in Horn revision.
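
For concreteness (our sketch, with a hypothetical three-argument AF; not code from the paper): a Dung AF is a set of arguments with an attack relation, and a stable extension is a conflict-free set that attacks every argument outside it. A brute-force enumerator:

    from itertools import combinations

    args = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "a"), ("b", "c")}  # hypothetical AF: a and b attack each other, b attacks c

    def conflict_free(S):
        return not any((x, y) in attacks for x in S for y in S)

    def stable(S):
        # stable = conflict-free and attacks every argument outside S
        return conflict_free(S) and all(
            any((s, x) in attacks for s in S) for x in args - S
        )

    for k in range(len(args) + 1):
        for S in combinations(sorted(args), k):
            if stable(set(S)):
                print(set(S))  # prints {'b'} and {'a', 'c'}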


On the Relationship Between KR Approaches for Explainable Planning

Vasileiou, Stylianos Loukas, Yeoh, William, Son, Tran Cao

arXiv.org Artificial Intelligence

In this paper, we build upon notions from knowledge representation and reasoning (KR) to expand a preliminary logic-based framework that characterizes the model reconciliation problem for explainable planning. We also provide a detailed exposition on the relationship between similar KR techniques, such as abductive explanations and belief change, and their applicability to explainable planning.