Belief Revision

Iterated Belief Base Revision: A Dynamic Epistemic Logic Approach

Artificial Intelligence

AGM belief revision is one of the main paradigms in the study of belief change operations. In this context, belief bases (prioritised bases) have been widely used to specify the agent's belief state, whether representing the agent's 'explicit beliefs' or serving as a computational model for her belief state. While the connection between iterated AGM-like operations and their encoding in dynamic epistemic logics has been studied before, few works have considered how well-known postulates from iterated belief revision theory can be characterised by means of belief bases and their counterparts in a dynamic epistemic logic. This work investigates how priority graphs, a syntactic representation of preference relations deeply connected to prioritised bases, can be used to characterise belief change operators, focusing on well-known postulates of Iterated Belief Change. We provide syntactic representations of belief change operators in a dynamic context, as well as new negative results regarding the possibility of representing an iterated belief revision operation using transformations on priority graphs.

A Generalisation of AGM Contraction and Revision to Fragments of First-Order Logic

Journal of Artificial Intelligence Research

AGM contraction and revision assume an underlying logic that contains propositional logic. Consequently, this assumption excludes many useful logics such as the Horn fragment of propositional logic and most description logics. Our goal in this paper is to generalise AGM contraction and revision to (near-)arbitrary fragments of classical first-order logic. To this end, we first define a very general logic that captures these fragments. In so doing, we make the modest assumptions that a logic contains conjunction and that information is expressed by closed formulas or sentences. The resulting logic is called first-order conjunctive logic, or FC logic for short. We then take as our point of departure the AGM approach of constructing contraction functions through epistemic entrenchment, that is, entrenchment-based contraction. We redefine entrenchment-based contraction in ways that apply to any FC logic, which we call FC contraction. We prove a representation theorem showing its compliance with all the AGM contraction postulates except for the controversial recovery postulate. We also give methods for constructing revision functions through epistemic entrenchment, which we call FC revision and which also apply to any FC logic. We show that if the underlying FC logic contains tautologies then FC revision complies with all the AGM revision postulates. Finally, in the context of FC logic, we provide three methods for generating revision functions via a variant of the Levi Identity, which we call contraction, withdrawal and cut generated revision, and explore the notion of revision equivalence. We show that withdrawal and cut generated revision coincide with FC revision, and so does contraction generated revision under a finiteness condition.
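The entrenchment-based contraction route the abstract takes as its point of departure can be sketched in the classic Gardenfors-Makinson form: psi survives the contraction of K by phi exactly when phi is a tautology or phi is strictly less entrenched than phi v psi. The sketch below is a minimal toy, assuming formulas are Python predicates over truth assignments; the model-counting "entrenchment" is purely illustrative (a genuine epistemic entrenchment must satisfy further postulates).

```python
from itertools import product

def worlds(atoms):
    """All truth assignments over the given atoms."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))]

def entrenchment_contract(K, phi, rank, atoms):
    """Entrenchment-based contraction, Gardenfors-Makinson style:
    if phi is a tautology, keep K unchanged; otherwise keep psi in K
    exactly when phi is strictly less entrenched than phi v psi."""
    if all(phi(w) for w in worlds(atoms)):
        return list(K)
    return [psi for psi in K
            if rank(phi) < rank(lambda w, f=phi, g=psi: f(w) or g(w))]

# Toy "entrenchment": count a formula's models, so logically weaker
# formulas rank higher. Illustration only, not a genuine entrenchment.
def model_count_rank(atoms):
    return lambda f: sum(1 for w in worlds(atoms) if f(w))

atoms = ["p", "q"]
p = lambda w: w["p"]
q = lambda w: w["q"]
p_or_q = lambda w: w["p"] or w["q"]
K = [p, q, p_or_q]
contracted = entrenchment_contract(K, p, model_count_rank(atoms), atoms)
# q and p v q survive the contraction by p; p itself is given up
```

Contracting by p removes p while retaining q and p v q, so the result no longer entails p but loses as little as the entrenchment ordering allows.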

Iterated Belief Revision Under Resource Constraints: Logic as Geometry

Artificial Intelligence

We propose a variant of iterated belief revision designed for settings with limited computational resources, such as mobile autonomous robots. The proposed memory architecture, called the universal memory architecture (UMA), maintains an epistemic state in the form of a system of default rules similar to those studied by Pearl and by Goldszmidt and Pearl (systems Z and Z+). A duality between the category of UMA representations and the category of the corresponding model spaces, extending the Sageev-Roller duality between discrete poc sets and discrete median algebras, provides a two-way dictionary from inference to geometry, leading to immense savings in computation, at a cost in the quality of representation that can be quantified in terms of topological invariants. Moreover, the same framework naturally enables comparisons between different model spaces, making it possible to analyze the deficiencies of one model space in comparison to others. This paper develops the formalism underlying UMA, analyzes the complexity of maintenance and inference operations in UMA, and presents some learning guarantees for different UMA-based learners. Finally, we present simulation results to illustrate the viability of the approach, and close with a discussion of the strengths, weaknesses, and potential development of UMA-based learners.

Adams Conditioning and Likelihood Ratio Transfer Mediated Inference

Artificial Intelligence

Bayesian inference as applied in a legal setting is about belief transfer and involves a plurality of agents and communication protocols. A forensic expert (FE) may first communicate to a trier of fact (TOF) the value of a certain likelihood ratio with respect to the FE's belief state, as represented by a probability function on the FE's proposition space. Subsequently, the FE communicates its recently acquired confirmation that a certain evidence proposition is true. The TOF then performs likelihood ratio transfer mediated reasoning, thereby revising their own belief state. The logical principles involved in likelihood ratio transfer mediated reasoning are discussed in a setting where probabilistic arithmetic is done within a meadow, with Adams conditioning placed in a central role.
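The core Bayesian mechanism behind the likelihood ratio transfer the abstract describes is the odds form of Bayes' rule: the TOF multiplies their prior odds for a hypothesis by the FE's reported likelihood ratio to obtain posterior odds. A minimal sketch (the function name and numbers are illustrative, not from the paper):

```python
def posterior_via_likelihood_ratio(prior: float, lr: float) -> float:
    """Update P(H) to P(H|E) given LR = P(E|H) / P(E|~H),
    using posterior odds = prior odds * LR."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

# TOF holds prior P(H) = 0.5; FE reports a likelihood ratio of 9
# for the confirmed evidence proposition.
p = posterior_via_likelihood_ratio(0.5, 9.0)  # -> 0.9
```

Note that the TOF never needs the FE's full probability function, only the reported ratio, which is what makes the transfer a communication protocol between belief states rather than a merge of them.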

Rethinking Epistemic Logic with Belief Bases

Artificial Intelligence

We introduce a new semantics for a logic of explicit and implicit beliefs based on the concept of multi-agent belief base. Unlike existing Kripke-style semantics for epistemic logic, in which the notions of possible world and doxastic/epistemic alternative are primitive, in our semantics they are not primitive but are defined from the concept of belief base. We provide a complete axiomatization and prove decidability for our logic via a finite model argument. We also provide a polynomial embedding of our logic into Fagin & Halpern's logic of general awareness and establish a complexity result for our logic via the embedding.
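The explicit/implicit distinction the abstract draws can be illustrated in a single-agent toy: an agent explicitly believes a formula when it is literally in her base, and implicitly believes it when every world satisfying the whole base also satisfies it. The formula names and the SAT evaluation table below are illustrative assumptions, not the paper's formal language.

```python
from itertools import product

# Toy evaluator: each formula name maps to its truth condition
# (illustrative; the paper's object language is richer).
SAT = {
    "p": lambda w: w["p"],
    "q": lambda w: w["q"],
    "p->q": lambda w: (not w["p"]) or w["q"],
}

def worlds(atoms):
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))]

def explicitly_believes(base, alpha):
    """Explicit belief: alpha literally appears in the belief base."""
    return alpha in base

def implicitly_believes(base, alpha, atoms):
    """Implicit belief: alpha holds in every world satisfying the base."""
    return all(SAT[alpha](w) for w in worlds(atoms)
               if all(SAT[b](w) for b in base))

base = {"p", "p->q"}
explicitly_believes(base, "q")              # False: q is not in the base
implicitly_believes(base, "q", ["p", "q"])  # True: the base entails q
```

The doxastic alternatives of the semantics arise exactly as the worlds the inner filter keeps, which is why possible worlds can be defined from the base rather than taken as primitive.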

Self-Guided Belief Propagation -- A Homotopy Continuation Method

Machine Learning

We propose self-guided belief propagation (SBP) that modifies belief propagation (BP) by incorporating the pairwise potentials only gradually. This homotopy continuation method converges to a unique solution and increases the accuracy without increasing the computational burden. We apply SBP to grid graphs, complete graphs, and random graphs with random Ising potentials and show that: (i) SBP is superior in terms of accuracy whenever BP converges, and (ii) SBP obtains a unique, stable, and accurate solution whenever BP does not converge. We further provide a formal analysis to demonstrate that SBP obtains the global optimum of the Bethe approximation for attractive models with unidirectional fields.
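The homotopy idea can be sketched directly: scale every pairwise potential by a parameter that ramps from 0 (independent variables) to 1 (the full model), warm-starting each BP run with the previous messages. The sketch below, assuming a binary (+1/-1) pairwise model with fields h and couplings J, is a minimal illustration of the scheme, not the authors' implementation.

```python
import math

def bp(h, J, lam, msgs=None, iters=50):
    """Sum-product BP on a binary pairwise model whose couplings are
    scaled by the homotopy parameter lam in [0, 1]."""
    nodes, edges = list(h), list(J)
    if msgs is None:  # uniform initial messages m[(i, j)][x_j]
        msgs = {(i, j): {+1: 0.5, -1: 0.5}
                for (a, b) in edges for (i, j) in [(a, b), (b, a)]}
    nbrs = {i: [] for i in nodes}
    for (a, b) in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            out = {}
            for xj in (+1, -1):
                s = 0.0
                for xi in (+1, -1):
                    t = math.exp(h[i] * xi
                                 + lam * J[tuple(sorted((i, j)))] * xi * xj)
                    for k in nbrs[i]:
                        if k != j:
                            t *= msgs[(k, i)][xi]
                    s += t
                out[xj] = s
            z = out[+1] + out[-1]
            new[(i, j)] = {x: v / z for x, v in out.items()}
        msgs = new
    beliefs = {}
    for i in nodes:
        b = {}
        for x in (+1, -1):
            t = math.exp(h[i] * x)
            for k in nbrs[i]:
                t *= msgs[(k, i)][x]
            b[x] = t
        z = b[+1] + b[-1]
        beliefs[i] = {x: v / z for x, v in b.items()}
    return beliefs, msgs

def self_guided_bp(h, J, steps=10, iters=50):
    """SBP sketch: ramp lam from 0 to 1, reusing messages at each step."""
    msgs = None
    for s in range(steps + 1):
        beliefs, msgs = bp(h, J, lam=s / steps, msgs=msgs, iters=iters)
    return beliefs

h = {0: 0.2, 1: -0.1, 2: 0.0}
J = {(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.5}  # attractive triangle
beliefs = self_guided_bp(h, J)
```

At lam = 0 the fixed point is unique and trivial, which is what anchors the continuation; each subsequent step only perturbs the model slightly, so the warm-started messages track a single solution branch.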

Studies in Credibility-Limited Base Revision

AAAI Conferences

In this paper we present axiomatic characterizations for several classes of credibility-limited base revision functions and establish the interrelation among those classes. We also propose and axiomatically characterize two new base revision functions.
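The defining move of credibility-limited revision is a guard: the base is revised by an input only when the input is credible, and is left untouched otherwise. The sketch below is a toy instantiation on bases of literals, with a protected core standing in for the credibility test and a naive add-and-drop-the-negation inner operator; all of these concrete choices are illustrative assumptions.

```python
def neg(lit):
    """Literal negation: 'p' <-> '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def literal_revise(base, phi):
    """Toy inner operator on literal bases: drop ~phi, add phi."""
    return (set(base) - {neg(phi)}) | {phi}

def credibility_limited_revise(base, phi, core):
    """Revise only if phi does not contradict the protected core;
    otherwise reject the input and keep the base as it is."""
    if neg(phi) in core:          # phi is not credible
        return set(base)
    return literal_revise(base, phi)

B = {"p", "q"}
credibility_limited_revise(B, "~q", core={"p"})  # {'p', '~q'}
credibility_limited_revise(B, "~p", core={"p"})  # {'p', 'q'} unchanged
```

The second call shows the characteristic failure of the AGM success postulate: the input ~p is simply not incorporated.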

Specifying Plausibility Levels for Iterated Belief Change in the Situation Calculus

AAAI Conferences

We investigate augmenting a theory of belief and actions with qualitative plausibility levels. Shapiro et al. created a framework for modeling iterated belief revision and update which integrated those features with the well-developed theory of action in the situation calculus. However, applying their technique requires associating plausibility levels with initial situations, for which no very convenient mechanism had been proposed. Schwering and Lakemeyer proposed deriving these initial plausibility levels from a set of conditionals, similarly to how models are ranked in Pearl's System Z. However, their approach inherits some limitations of System Z. We consider alternatives, and argue that a perspicuous approach is to measure plausibility by counting the abnormalities in a situation (similarly to cardinality-based circumscription). By allowing abnormalities to change over time, we can also model changing plausibility levels in a natural and simple way, which gives us a flexible approach for handling belief change about predicted and unpredicted exogenous actions.
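The abnormality-counting idea can be sketched in a few lines: a situation's plausibility level is the number of abnormality atoms that hold in it, and the most plausible situations are those minimising that count, as in cardinality-based circumscription. The atom naming convention and the dict encoding of situations below are illustrative assumptions.

```python
def plausibility(situation):
    """Plausibility level = number of abnormality atoms that hold;
    lower levels are more plausible (cardinality-based circumscription)."""
    return sum(1 for atom, holds in situation.items()
               if atom.startswith("Ab") and holds)

def most_plausible(situations):
    """Situations with the fewest abnormalities."""
    best = min(plausibility(s) for s in situations)
    return [s for s in situations if plausibility(s) == best]

s1 = {"Ab1": False, "Ab2": False, "door_open": True}
s2 = {"Ab1": True,  "Ab2": False, "door_open": False}
most_plausible([s1, s2])   # [s1]: no abnormalities hold
```

Because the count is recomputed from whatever abnormalities currently hold, letting abnormalities change over time automatically changes the plausibility levels, which is the flexibility the abstract points to for exogenous actions.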

Parametrised Difference Revision

AAAI Conferences

Despite the great theoretical advancements in the area of Belief Revision, there has been limited success in terms of implementations. One of the hurdles in implementing revision operators is that their specification (let alone their computation) requires substantial resources. On the other hand, implementing a specific revision operator, like Dalal's operator, would be of limited use. In a recent paper, we generalised Dalal's construction, defining a whole family of concrete revision operators, called Parametrised Difference revision operators, or PD operators for short. This family is wide enough to cover a whole range of different applications, and at the same time it is easy to represent. In this paper we characterise axiomatically the family of PD operators, study its computational complexity, and discuss its benefits for belief revision implementations.
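Dalal's operator, the base case the PD family generalises, is easy to state model-theoretically: the revised belief set is characterised by the models of the input that lie at minimum Hamming distance from some model of the current beliefs. A brute-force sketch over a small signature (formulas as Python predicates; how PD operators parametrise the difference is left to the paper):

```python
from itertools import product

def models(formula, atoms):
    """All truth assignments over `atoms` satisfying `formula`
    (a Python predicate over an assignment dict)."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def hamming(w, v):
    """Number of atoms on which two assignments differ."""
    return sum(w[a] != v[a] for a in w)

def dalal_revise(K, phi, atoms):
    """Dalal revision: models of phi at minimum Hamming distance
    from the models of K."""
    mk, mp = models(K, atoms), models(phi, atoms)
    d = min(min(hamming(w, v) for w in mk) for v in mp)
    return [v for v in mp
            if min(hamming(w, v) for w in mk) == d]

atoms = ["p", "q"]
K = lambda w: w["p"] and w["q"]   # believe p and q
phi = lambda w: not w["p"]        # learn not-p
dalal_revise(K, phi, atoms)       # [{'p': False, 'q': True}]
```

Revising p-and-q by not-p keeps q, since flipping only p is the minimal change, which illustrates why the brute-force specification above scales poorly and why an easy-to-represent family like the PD operators matters for implementations.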

Incorporating Relevance in Epistemic States in Belief Revision

AAAI Conferences

We present an account of relevance in belief revision where, intuitively, one wants to consider only the relevant part of an agent's epistemic state in a revision. We assume that relevance is a domain-specific notion, and that (ir)relevance assertions are given as part of the agent's epistemic state. Such assertions apply in a given context, and are of the form "in the case that formula σ holds, the Y part of the agent's epistemic state is independent of the rest of the epistemic state", where Y is part of the signature of the language. Two approaches are given: one in which (in semantic terms) conditions are placed on a faithful ranking on possible worlds to enforce the (ir)relevance assertions, and a second in which the possible worlds characterising the agent's beliefs may be modified in a revision. These approaches are shown to yield the same resulting belief set. Corresponding postulates and a representation result are given. The overall approach is compared to Parikh's approach to language splitting, as well as to multivalued dependencies in relational databases.
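One naive reading of the second approach (modifying the possible worlds characterising the beliefs) can be sketched as follows: when the input formula concerns only the relevant sub-signature Y, replace the Y-part of each belief world by a closest input world over Y, leaving the rest of the world untouched. This is a heavily simplified illustration under those assumptions, with Hamming distance on Y standing in for the faithful ranking; it is not the paper's construction.

```python
def proj(w, atoms):
    """Projection of an assignment onto a sub-signature, as a sorted
    tuple so projections can be compared position by position."""
    return tuple(sorted((a, w[a]) for a in atoms))

def relevant_revise(k_models, phi_models_Y, Y):
    """Sketch: revise only the Y-part of each K-world, keeping the
    rest of the epistemic state fixed (Hamming distance on Y)."""
    def ham(p, q):
        return sum(1 for (a, x), (_, y) in zip(p, q) if x != y)
    out = []
    for w in k_models:
        py = proj(w, Y)
        d = min(ham(py, proj(v, Y)) for v in phi_models_Y)
        for v in phi_models_Y:
            if ham(py, proj(v, Y)) == d:
                nw = dict(w)
                nw.update({a: v[a] for a in Y})
                if nw not in out:
                    out.append(nw)
    return out

k_models = [{"p": True, "q": True, "r": True}]
phi_models_Y = [{"q": False, "r": True},
                {"q": False, "r": False}]   # models of not-q over Y
relevant_revise(k_models, phi_models_Y, ["q", "r"])
# [{'p': True, 'q': False, 'r': True}]
```

The point of the sketch is the invariant: the p-coordinate of every output world agrees with the corresponding input world, so beliefs outside Y are provably unaffected by the revision.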