Belief Revision


Revision by Conditionals: From Hook to Arrow

arXiv.org Artificial Intelligence

The belief revision literature has largely focussed on the issue of how to revise one's beliefs in the light of information regarding matters of fact. Here we turn to an important but comparatively neglected issue: how might one extend a revision operator to handle conditionals as input? Our approach to this question of 'conditional revision' is distinctive insofar as it abstracts from the controversial details of how to revise by factual sentences. We introduce a 'plug and play' method for uniquely extending any iterated belief revision operator to the conditional case. The flexibility of our approach is achieved by having the result of a conditional revision by a Ramsey Test conditional ('arrow') determined by that of a plain revision by its corresponding material conditional ('hook'). The resulting conditional revision operators are shown to satisfy a number of new constraints that are of independent interest.
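To make the 'plug and play' interface concrete, here is a small Python sketch. It assumes a ranked-worlds belief state and a Boutilier-style natural revision operator; all function names are hypothetical, and reducing the conditional input to its material conditional is a deliberate simplification of the paper's construction, shown only to illustrate the hook-to-arrow idea and the Ramsey Test check.

from itertools import product

def worlds(atoms):
    # All truth-value assignments over the given atoms.
    return [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

def models(sentence, ws):
    # Worlds satisfying a sentence, where a sentence is a predicate on worlds.
    return [w for w in ws if sentence(w)]

def revise(order, sentence, ws):
    # Natural-style revision on a ranked state: the most plausible models of
    # the sentence are promoted to rank 0; all other worlds keep their
    # relative order (shifted down by one).
    best = min(order[frozenset(w.items())] for w in models(sentence, ws))
    new = {}
    for w in ws:
        key = frozenset(w.items())
        promoted = sentence(w) and order[key] == best
        new[key] = 0 if promoted else order[key] + 1
    return new

def accepts(order, sentence, ws):
    # A ranked state accepts a sentence iff every minimal-rank world satisfies it.
    minimum = min(order.values())
    return all(sentence(w) for w in ws if order[frozenset(w.items())] == minimum)

def revise_by_conditional(order, antecedent, consequent, ws):
    # Conditional revision sketched as plain revision by the material
    # conditional ('hook'); the paper's method refines this so the result
    # also behaves correctly with respect to the Ramsey Test 'arrow'.
    hook = lambda w: (not antecedent(w)) or consequent(w)
    return revise(order, hook, ws)

# Example: two atoms, starting from a completely flat belief state.
ws = worlds(["p", "q"])
flat = {frozenset(w.items()): 0 for w in ws}
state = revise_by_conditional(flat, lambda w: w["p"], lambda w: w["q"], ws)
# Ramsey Test check: after further revising by p, is q accepted?
print(accepts(revise(state, lambda w: w["p"], ws), lambda w: w["q"], ws))  # True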


Moore's Paradox and the logic of belief

arXiv.org Artificial Intelligence

Moore's Paradox is a test case for any formal theory of belief. In Knowledge and Belief, Hintikka developed a multimodal logic for statements that express sentences containing the epistemic notions of knowledge and belief. His account purports to offer an explanation of the paradox. In this paper I argue that Hintikka's interpretation of one of the doxastic operators is philosophically problematic and leads to an unnecessarily strong logical system. I offer a weaker alternative that captures our logical intuitions about the notion of belief more accurately, without sacrificing the possibility of providing an explanation for problematic cases such as Moore's Paradox.


Towards the Role of Theory of Mind in Explanation

arXiv.org Artificial Intelligence

Theory of Mind is commonly defined as the ability to attribute mental states (e.g., beliefs, goals) to oneself, and to others. A large body of previous work - from the social sciences to artificial intelligence - has observed that Theory of Mind capabilities are central to providing an explanation to another agent or when explaining that agent's behaviour. In this paper, we build and expand upon previous work by providing an account of explanation in terms of the beliefs of agents and the mechanism by which agents revise their beliefs given possible explanations. We further identify a set of desiderata for explanations that utilize Theory of Mind. These desiderata inform our belief-based account of explanation.


How to Do Things with Words: A Bayesian Approach

Journal of Artificial Intelligence Research

Communication changes the beliefs of the listener and of the speaker. The value of a communicative act stems from the valuable belief states which result from this act. To model this we build on the Interactive POMDP (IPOMDP) framework, which extends POMDPs to allow agents to model others in multi-agent settings, and we include communication that can take place between the agents to formulate Communicative IPOMDPs (CIPOMDPs). We treat communication as a type of action; therefore, decisions regarding communicative acts are based on decision-theoretic planning using the Bellman optimality principle and value iteration, just as they are for all other rational actions. As in any form of planning, the results of actions need to be precisely specified. We use Bayes' theorem to derive how agents update their beliefs in CIPOMDPs; updates are due to agents' actions, observations, messages they send to other agents, and messages they receive from others. The Bayesian decision-theoretic approach frees us from the commonly made assumption of cooperative discourse - we consider agents which are free to be dishonest while communicating and are guided only by their selfish rationality. We use a simple Tiger game to illustrate the belief update, and to show that the ability to rationally communicate allows agents to improve the efficiency of their interactions.
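As a rough illustration of the belief updates described above, the following Python sketch applies Bayes' theorem in the two-door Tiger game, treating a received message as one more (possibly unreliable) observation channel. The sensor and truthfulness probabilities are invented for the example, and the helper names are hypothetical rather than part of the CIPOMDP framework.

def bayes_update(belief, likelihoods):
    # belief: {state: P(state)}; likelihoods: {state: P(evidence | state)}.
    unnormalized = {s: belief[s] * likelihoods[s] for s in belief}
    z = sum(unnormalized.values())
    return {s: p / z for s, p in unnormalized.items()}

# Prior: the tiger is equally likely to be behind either door.
belief = {"tiger-left": 0.5, "tiger-right": 0.5}

# The agent listens and hears growling on the left (85% reliable sensor, assumed).
belief = bayes_update(belief, {"tiger-left": 0.85, "tiger-right": 0.15})

# Another agent sends the message "tiger is on the left"; since senders may be
# dishonest, the message counts only as weak evidence (60% truthful, assumed).
belief = bayes_update(belief, {"tiger-left": 0.60, "tiger-right": 0.40})

print(belief)  # belief mass shifts further toward 'tiger-left'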


Planning in Stochastic Environments with Goal Uncertainty

arXiv.org Artificial Intelligence

We present the Goal Uncertain Stochastic Shortest Path (GUSSP) problem -- a general framework to model path planning and decision making in stochastic environments with goal uncertainty. The framework extends the stochastic shortest path (SSP) model to dynamic environments in which it is impossible to determine the exact goal states ahead of plan execution. GUSSPs introduce flexibility in goal specification by allowing a belief over possible goal configurations. The unique observations at potential goals help the agent identify the true goal during plan execution. The partial observability is restricted to goals, facilitating the reduction to an SSP with a modified state space. We formally define a GUSSP and discuss its theoretical properties. We then propose an admissible heuristic that reduces the planning time using FLARES -- a state-of-the-art probabilistic planner. We also propose a determinization approach for solving this class of problems. Finally, we present empirical results on a search and rescue mobile robot and three other problem domains in simulation.
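The goal-belief bookkeeping can be sketched as follows, under the simplifying assumption that the identifying observation at a candidate goal is noise-free. The class and method names are hypothetical and this is not the paper's formal GUSSP model, only an illustration of how a belief over candidate goals gets renormalized or collapsed during execution.

from dataclasses import dataclass

@dataclass
class GoalBelief:
    probs: dict  # candidate goal state -> probability that it is the true goal

    def observe_at(self, location, is_goal):
        # Collapse or renormalize the belief after checking a candidate goal.
        if is_goal:
            self.probs = {g: (1.0 if g == location else 0.0) for g in self.probs}
        else:
            self.probs[location] = 0.0
            z = sum(self.probs.values())
            self.probs = {g: p / z for g, p in self.probs.items()}
        return self.probs

belief = GoalBelief({"roomA": 1/3, "roomB": 1/3, "roomC": 1/3})
belief.observe_at("roomA", is_goal=False)   # roomB and roomC now at 0.5 each
belief.observe_at("roomB", is_goal=True)    # belief collapses onto roomB
print(belief.probs)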


Explosive Proofs of Mathematical Truths

arXiv.org Artificial Intelligence

Mathematical proofs are both paradigms of certainty and some of the most explicitly-justified arguments that we have in the cultural record. Their very explicitness, however, leads to a paradox, because their probability of error grows exponentially as the argument expands. Here we show that under a cognitively-plausible belief formation mechanism that combines deductive and abductive reasoning, mathematical arguments can undergo what we call an epistemic phase transition: a dramatic and rapidly-propagating jump from uncertainty to near-complete confidence at reasonable levels of claim-to-claim error rates. To show this, we analyze an unusual dataset of forty-eight machine-aided proofs from the formalized reasoning system Coq, including major theorems ranging from ancient to 21st Century mathematics, along with four hand-constructed cases from Euclid, Apollonius, Spinoza, and Andrew Wiles. Our results bear both on recent work in the history and philosophy of mathematics, and on a question, basic to cognitive science, of how we form beliefs, and justify them to others.
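The 'paradox' in the opening sentences can be made concrete with a one-line calculation: if each claim-to-claim step carries an independent error rate, confidence in a purely deductive chain decays geometrically with its length. The error rate below is an illustrative assumption, not a figure from the paper.

eps = 0.01  # assumed per-step (claim-to-claim) error rate
for n in (10, 100, 1000):
    print(n, round((1 - eps) ** n, 5))
# 10 steps -> ~0.904, 100 steps -> ~0.366, 1000 steps -> ~0.00004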


Shaping Belief States with Generative Environment Models for RL

Neural Information Processing Systems

When agents interact with a complex environment, they must form and maintain beliefs about the relevant aspects of that environment. We propose a way to efficiently train expressive generative models in complex environments. We show that a predictive algorithm with an expressive generative model can form stable belief-states in visually rich and dynamic 3D environments. More precisely, we show that the learned representation captures the layout of the environment as well as the position and orientation of the agent. Our experiments show that the model substantially improves data-efficiency on a number of reinforcement learning (RL) tasks compared with strong model-free baseline agents.


Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

Neural Information Processing Systems

Belief propagation is a fundamental message-passing algorithm for probabilistic reasoning and inference in graphical models. While it is known to be exact on trees, in most applications belief propagation is run on graphs with cycles. Understanding the behavior of 'loopy' belief propagation has been a major challenge for researchers in machine learning, and several positive convergence results for BP are known under strong assumptions which imply that the underlying graphical model exhibits decay of correlations. We show that under a natural initialization, BP converges quickly to the global optimum of the Bethe free energy for Ising models on arbitrary graphs, as long as the Ising model is ferromagnetic (i.e., neighboring spins prefer to align). This holds even though such models can exhibit long-range correlations and may have multiple suboptimal BP fixed points.
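For readers unfamiliar with the setting, the sketch below runs synchronous loopy belief propagation on a small ferromagnetic Ising model (a 4-cycle). The graph, coupling strength, external field, and the uniform all-ones message initialization are illustrative choices, not the specific initialization or analysis used in the paper.

import numpy as np

spins = [-1, +1]

def run_bp(edges, n, J=0.5, h=0.1, iters=50):
    # Synchronous loopy BP on a pairwise Ising model with uniform coupling J
    # and external field h; msgs[(i, j)] is the message from node i to node j.
    neighbors = {i: [] for i in range(n)}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    msgs = {(i, j): np.ones(2) for i in range(n) for j in neighbors[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            out = np.zeros(2)
            for b, xj in enumerate(spins):
                for a, xi in enumerate(spins):
                    incoming = np.prod([msgs[(k, i)][a] for k in neighbors[i] if k != j])
                    out[b] += np.exp(h * xi + J * xi * xj) * incoming
            new[(i, j)] = out / out.sum()
        msgs = new
    # Node marginals from the final messages.
    marginals = {}
    for i in range(n):
        belief = np.array([np.exp(h * xi) * np.prod([msgs[(k, i)][a] for k in neighbors[i]])
                           for a, xi in enumerate(spins)])
        marginals[i] = belief / belief.sum()
    return marginals

# Ferromagnetic (J > 0) Ising model on a 4-cycle, which is a loopy graph.
print(run_bp(edges=[(0, 1), (1, 2), (2, 3), (3, 0)], n=4))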


Neural Enhanced Belief Propagation on Factor Graphs

arXiv.org Machine Learning

A graphical model is a structured representation of locally dependent random variables. A traditional method to reason over these random variables is to perform inference using belief propagation. When provided with the true data-generating process, belief propagation can infer the optimal posterior probability estimates in tree-structured factor graphs. However, in many cases we may only have access to a poor approximation of the data-generating process, or we may face loops in the factor graph, leading to suboptimal estimates. In this work we first extend graph neural networks to factor graphs (FG-GNN). We then propose a new hybrid model that runs an FG-GNN conjointly with belief propagation. The FG-GNN receives as input the messages from belief propagation at every inference iteration and outputs a corrected version of them. As a result, we obtain a more accurate algorithm that combines the benefits of both belief propagation and graph neural networks. We apply our ideas to error-correction decoding tasks, and we show that our algorithm can outperform belief propagation for LDPC codes on bursty channels.
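The interleaving of classical message passing with a learned correction can be sketched schematically as follows. Here bp_update and CorrectionNet are hypothetical stand-ins (the real model runs belief propagation on a factor graph and uses the FG-GNN described in the paper); the sketch only makes the per-iteration message-correction loop concrete.

import torch
import torch.nn as nn

class CorrectionNet(nn.Module):
    # Toy stand-in for the FG-GNN: maps each message vector to a residual correction.
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, dim))

    def forward(self, messages):
        return messages + self.net(messages)

def bp_update(messages):
    # Placeholder for one synchronous BP message update on the factor graph.
    return torch.softmax(messages, dim=-1)

def hybrid_inference(messages, corrector, iters=10):
    for _ in range(iters):
        messages = bp_update(messages)      # classical BP step
        messages = corrector(messages)      # learned correction (the FG-GNN's role)
    return torch.softmax(messages, dim=-1)  # final (pseudo-)marginals

msgs = torch.rand(8, 2)                      # 8 messages over binary variables
print(hybrid_inference(msgs, CorrectionNet(dim=2)))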


Belief Base Revision for Further Improvement of Unified Answer Set Programming

arXiv.org Artificial Intelligence

In the domain of knowledge representation and reasoning, belief revision plays an important role. The objective of belief revision is to study the process of belief change: when a rational agent comes across new information that contradicts his or her present beliefs, some of those beliefs must be retracted in order to accommodate the new information consistently. The three main principles on which belief revision methodologies rely are: 1. Success: the new information must be accepted in the revised set of beliefs; 2. Consistency: the set of beliefs obtained after revision must be consistent; 3. Minimal Change: if changes have to be made to restore consistency, they should be as small as possible. The set of information of a rational agent can be represented by a deductively closed set of rules, i.e., a belief set, or by a set of rules that is