Belief Revision


Compact Belief State Representation for Task Planning

arXiv.org Artificial Intelligence

Task planning in probabilistic belief-state domains allows generating complex and robust execution policies in domains affected by state uncertainty. The performance of a task planner relies on the belief state representation. However, current belief state representations quickly become intractable as the number of variables and the execution horizon grow. To address this problem, we developed a novel belief state representation based on Cartesian product and union operations over belief substates. These two operations, together with single-variable assignment nodes, form an And-Or directed acyclic graph of belief states (AOBS). We show how to apply actions with probabilistic outcomes and how to measure the probability of conditions holding over the belief state. We evaluated AOBS performance in simulated forward state space exploration. We compared the size of AOBS with the size of binary decision diagrams (BDDs), which were previously used to represent belief states. We show that the AOBS representation is not only far more compact than a full belief state but also scales better than BDDs in most cases.
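The abstract does not spell out the AOBS data structure, but its two operations suggest a shape along the lines of the following minimal Python sketch; Leaf, AndNode, OrNode, and prob are hypothetical names, with And-nodes modeling the Cartesian product of independent substates and Or-nodes a probability-weighted union:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Leaf:
    var: str
    value: object           # single variable assignment

@dataclass
class AndNode:
    children: List["Node"]  # substates over disjoint variable sets

@dataclass
class OrNode:
    children: List[Tuple[float, "Node"]]  # (probability, substate); weights sum to 1

Node = Leaf | AndNode | OrNode  # Python 3.10+

def prob(node: Node, condition: Dict[str, object]) -> float:
    """Probability that `condition` (a partial assignment) holds."""
    if isinstance(node, Leaf):
        if node.var in condition:
            return 1.0 if condition[node.var] == node.value else 0.0
        return 1.0  # unconstrained variable: condition says nothing about it
    if isinstance(node, AndNode):
        p = 1.0
        for child in node.children:  # independence across substates
            p *= prob(child, condition)
        return p
    return sum(w * prob(child, condition) for w, child in node.children)
```

Sharing of identical substates across the DAG is what would make this representation compact relative to an explicit distribution over full states.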


How to Do Things with Words: A Bayesian Approach

Journal of Artificial Intelligence Research

Communication changes the beliefs of the listener and of the speaker. The value of a communicative act stems from the valuable belief states that result from it. To model this, we build on the Interactive POMDP (IPOMDP) framework, which extends POMDPs to allow agents to model others in multi-agent settings, and we include communication that can take place between the agents to formulate Communicative IPOMDPs (CIPOMDPs). We treat communication as a type of action; therefore, decisions regarding communicative acts are based on decision-theoretic planning using the Bellman optimality principle and value iteration, just as they are for all other rational actions. As in any form of planning, the results of actions need to be precisely specified. We use Bayes' theorem to derive how agents update their beliefs in CIPOMDPs; updates are due to agents' actions, observations, messages they send to other agents, and messages they receive from others. The Bayesian decision-theoretic approach frees us from the commonly made assumption of cooperative discourse - we consider agents that are free to be dishonest while communicating and are guided only by their selfish rationality. We use a simple Tiger game to illustrate the belief update and to show that the ability to rationally communicate allows agents to improve the efficiency of their interactions.
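As a rough illustration of the Bayesian update the paper builds on, here is the classic single-agent listen step in the Tiger game; the 0.85 observation accuracy and the function names are illustrative, and the full CIPOMDP update additionally conditions on messages sent and received:

```python
def bayes_update(belief, obs, obs_model):
    """belief: P(state); obs_model[state][obs] = P(obs | state)."""
    posterior = {s: obs_model[s][obs] * p for s, p in belief.items()}
    z = sum(posterior.values())          # normalizing constant
    return {s: p / z for s, p in posterior.items()}

belief = {"tiger-left": 0.5, "tiger-right": 0.5}
obs_model = {
    "tiger-left":  {"hear-left": 0.85, "hear-right": 0.15},
    "tiger-right": {"hear-left": 0.15, "hear-right": 0.85},
}
belief = bayes_update(belief, "hear-left", obs_model)
print(belief)  # {'tiger-left': 0.85, 'tiger-right': 0.15}
```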


Belief Propagation Neural Networks

arXiv.org Machine Learning

Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems. More general counting variants of these problems, however, are still largely solved with hand-crafted solvers. To bridge this gap, we introduce belief propagation neural networks (BPNNs), a class of parameterized operators that operate on factor graphs and generalize belief propagation (BP). In its strictest form, a BPNN layer (BPNN-D) is a learned iterative operator that provably maintains many of the desirable properties of BP for any choice of the parameters. Empirically, we show that with training, BPNN-D learns to perform the task better than the original BP: it converges 1.7x faster on Ising models while providing tighter bounds. On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art hand-crafted methods, while returning an estimate of comparable quality.
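For context, the following minimal sketch implements the baseline loopy BP that BPNN layers generalize, here for a pairwise Ising model in log-odds form; a BPNN-D layer would replace the fixed update inside the loop with a learned operator (the damping constant and code organization are illustrative, not the paper's):

```python
import numpy as np

def bp_ising(J, h, iters=50, damp=0.5):
    """Loopy BP on an Ising model with couplings J (n x n) and fields h (n)."""
    n = len(h)
    m = np.zeros((n, n))                 # m[i, j]: message i -> j, log-odds form
    for _ in range(iters):
        new = np.zeros_like(m)
        for i in range(n):
            for j in range(n):
                if i == j or J[i, j] == 0.0:
                    continue             # no edge, no message
                cavity = h[i] + m[:, i].sum() - m[j, i]   # all incoming except j
                new[i, j] = np.arctanh(np.tanh(J[i, j]) * np.tanh(cavity))
        m = damp * m + (1 - damp) * new  # damped update
    return 1 / (1 + np.exp(-2 * (h + m.sum(axis=0))))    # P(spin_i = +1)
```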


Revision by Conditionals: From Hook to Arrow

arXiv.org Artificial Intelligence

The belief revision literature has largely focussed on the issue of how to revise one's beliefs in the light of information regarding matters of fact. Here we turn to an important but comparatively neglected issue: How might one extend a revision operator to handle conditionals as input? Our approach to this question of 'conditional revision' is distinctive insofar as it abstracts from the controversial details of how to revise by factual sentences. We introduce a 'plug and play' method for uniquely extending any iterated belief revision operator to the conditional case. The flexibility of our approach is achieved by having the result of a conditional revision by a Ramsey Test conditional ('arrow') determined by that of a plain revision by its corresponding material conditional ('hook'). The resulting operator is shown to satisfy a number of new constraints that are of independent interest.
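Read operationally, the determination relation could look like the following hypothetical sketch, where `revise` is any iterated revision operator supplied by the user and the arrow input is handled via its hook counterpart; the paper's actual construction places further constraints on how the result is assembled:

```python
def revise_by_arrow(state, a, b, revise):
    """Conditional revision by 'if A then B' (arrow), determined by plain
    revision with the material conditional A -> B (hook)."""
    hook = ("or", ("not", a), b)   # A -> B  ==  (not A) or B
    return revise(state, hook)
```

The 'plug and play' character is visible in the signature: nothing about `revise` is assumed beyond its being an iterated revision operator on factual sentences.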


α Belief Propagation for Approximate Inference

arXiv.org Machine Learning

The belief propagation (BP) algorithm is a widely used message-passing method for inference in graphical models. BP on loop-free graphs converges in linear time. But for graphs with loops, BP's performance is uncertain, and the understanding of its solution is limited. To gain a better understanding of BP in general graphs, we derive an interpretable belief propagation algorithm that is motivated by minimization of a localized α-divergence. We term this algorithm α belief propagation (α-BP). It turns out that α-BP generalizes standard BP. In addition, this work studies the convergence properties of α-BP. We prove convergence conditions for α-BP. Experimental simulations on random graphs validate our theoretical results. The application of α-BP to practical problems is also demonstrated.
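For reference, the localized objective builds on the standard α-divergence; in Minka's parameterization (the paper's exact parameterization may differ) it reads, for normalized densities,

\[
D_\alpha(p \,\|\, q) \;=\; \frac{1}{\alpha(1-\alpha)} \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right),
\]

with \(D_\alpha \to \mathrm{KL}(p \,\|\, q)\) as \(\alpha \to 1\) and \(D_\alpha \to \mathrm{KL}(q \,\|\, p)\) as \(\alpha \to 0\), which is why a family of BP-like algorithms indexed by α can recover standard BP as a special case.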


Moore's Paradox and the logic of belief

arXiv.org Artificial Intelligence

Moore's Paradox is a test case for any formal theory of belief. In Knowledge and Belief, Hintikka developed a multimodal logic for sentences containing the epistemic notions of knowledge and belief. His account purports to offer an explanation of the paradox. In this paper I argue that Hintikka's interpretation of one of the doxastic operators is philosophically problematic and leads to an unnecessarily strong logical system. I offer a weaker alternative that more accurately captures our logical intuitions about the notion of belief without sacrificing the possibility of providing an explanation for problematic cases such as Moore's Paradox.
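To make the test case concrete, a Moore sentence and the standard argument that believing it is inconsistent in a strong doxastic logic such as KD45 (a common formalization, not necessarily Hintikka's exact system):

\[
\varphi_{\text{Moore}} \;=\; p \wedge \neg B p
\]
\[
B(p \wedge \neg B p) \;\vdash\; Bp \wedge B\neg Bp \;\vdash\; BBp \wedge B\neg Bp \;\vdash\; \bot,
\]

using positive introspection \(Bp \to BBp\) (axiom 4) and consistency of belief \(B\varphi \to \neg B\neg\varphi\) (axiom D). A weaker system blocks steps of this derivation, which is the kind of trade-off at issue in the paper.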


Generalized Ranking Kinematics for Iterated Belief Revision

AAAI Conferences

Probability kinematics is a leading paradigm in probabilistic belief change. It is based on the idea that conditional beliefs should be independent of changes to their antecedents' probabilities. In this paper, we propose a re-interpretation of this paradigm for Spohn's ranking functions, which we call Generalized Ranking Kinematics, as a new principle for iterated belief revision of ranking functions by sets of conditional beliefs. This general setting also covers iterated revision by propositional beliefs. We then present c-revisions as a belief change methodology that satisfies Generalized Ranking Kinematics.
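A minimal sketch of the ranking-function machinery involved, assuming nothing from the paper beyond Spohn's standard definitions; the encoding of worlds and the helper names are illustrative:

```python
# Spohn-style ranking functions (OCFs): kappa maps worlds to ranks,
# with 0 = most plausible. A conditional (B|A) is believed
# iff kappa(A and B) < kappa(A and not B).

def rank(kappa, prop):
    """Rank of a proposition = minimum rank over its worlds (inf if empty)."""
    worlds = [w for w in kappa if prop(w)]
    return min(kappa[w] for w in worlds) if worlds else float("inf")

def believes_conditional(kappa, a, b):
    return (rank(kappa, lambda w: a(w) and b(w))
            < rank(kappa, lambda w: a(w) and not b(w)))

# Worlds encoded as (rain, wet); the rank-0 world is most plausible.
kappa = {(True, True): 1, (True, False): 3,
         (False, True): 2, (False, False): 0}
print(believes_conditional(kappa, lambda w: w[0], lambda w: w[1]))  # True
```

Ranking kinematics then asks that revisions preserve such conditional beliefs when only the plausibility of their antecedents shifts.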


Towards the Role of Theory of Mind in Explanation

arXiv.org Artificial Intelligence

Theory of Mind is commonly defined as the ability to attribute mental states (e.g., beliefs, goals) to oneself, and to others. A large body of previous work - from the social sciences to artificial intelligence - has observed that Theory of Mind capabilities are central to providing an explanation to another agent or when explaining that agent's behaviour. In this paper, we build and expand upon previous work by providing an account of explanation in terms of the beliefs of agents and the mechanism by which agents revise their beliefs given possible explanations. We further identify a set of desiderata for explanations that utilize Theory of Mind. These desiderata inform our belief-based account of explanation.


Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

Neural Information Processing Systems

Belief propagation is a fundamental message-passing algorithm for probabilistic reasoning and inference in graphical models. While it is known to be exact on trees, in most applications belief propagation is run on graphs with cycles. Understanding the behavior of "loopy" belief propagation has been a major challenge for researchers in machine learning, and several positive convergence results for BP are known under strong assumptions which imply that the underlying graphical model exhibits decay of correlations. We show that under a natural initialization, BP converges quickly to the global optimum of the Bethe free energy for Ising models on arbitrary graphs, as long as the Ising model is ferromagnetic (i.e., the interactions encourage neighboring spins to agree). This holds even though such models can exhibit long-range correlations and may have multiple suboptimal BP fixed points.
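For concreteness, the setting is the ferromagnetic Ising model,

\[
P(x) \;\propto\; \exp\!\Big( \sum_{(i,j) \in E} J_{ij}\, x_i x_j \;+\; \sum_{i} h_i\, x_i \Big), \qquad x_i \in \{-1, +1\}, \quad J_{ij} \ge 0,
\]

where the nonnegativity of the couplings \(J_{ij}\) is what makes the model ferromagnetic; BP fixed points correspond to stationary points of the Bethe free energy (Yedidia et al.), which is why convergence to its global optimum is the natural target.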


Neural Enhanced Belief Propagation on Factor Graphs

arXiv.org Machine Learning

A graphical model is a structured representation of locally dependent random variables. A traditional method to reason over these random variables is to perform inference using belief propagation. When provided with the true data generating process, belief propagation can infer the optimal posterior probability estimates in tree-structured factor graphs. However, in many cases we may only have access to a poor approximation of the data generating process, or we may face loops in the factor graph, leading to suboptimal estimates. In this work we first extend graph neural networks to factor graphs (FG-GNN). We then propose a new hybrid model that runs an FG-GNN conjointly with belief propagation. The FG-GNN receives as input the messages from belief propagation at every inference iteration and outputs a corrected version of them. As a result, we obtain a more accurate algorithm that combines the benefits of both belief propagation and graph neural networks. We apply our ideas to error correction decoding tasks, and we show that our algorithm can outperform belief propagation for LDPC codes on bursty channels.
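The abstract suggests an iteration of roughly the following shape; `bp_step`, `FGGNN`, and the residual combination are placeholders for one plausible design, not necessarily the paper's exact architecture:

```python
import torch

class NeuralEnhancedBP(torch.nn.Module):
    """Hybrid loop: each round, standard BP produces messages and a
    graph network over the factor graph outputs a learned correction."""

    def __init__(self, gnn, bp_step, iters=10):
        super().__init__()
        self.gnn = gnn          # any GNN operating on the factor graph
        self.bp_step = bp_step  # one standard BP message update
        self.iters = iters

    def forward(self, factor_graph, messages):
        for _ in range(self.iters):
            messages = self.bp_step(factor_graph, messages)       # BP update
            messages = messages + self.gnn(factor_graph, messages)  # learned residual correction
        return messages
```

A residual correction is a natural choice here: with the GNN output near zero the hybrid falls back to plain BP, so the learned part only has to model BP's error.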