Belief Revision

Adams Conditioning and Likelihood Ratio Transfer Mediated Inference

Artificial Intelligence

Bayesian inference as applied in a legal setting is about belief transfer and involves a plurality of agents and communication protocols. A forensic expert (FE) may first communicate to a trier of fact (TOF) the value of a certain likelihood ratio with respect to the FE's belief state, as represented by a probability function on the FE's proposition space. Subsequently, the FE communicates its recently acquired confirmation that a certain evidence proposition is true. The TOF then performs likelihood ratio transfer mediated reasoning, thereby revising its own belief state. The logical principles involved in likelihood ratio transfer mediated reasoning are discussed in a setting where probabilistic arithmetic is done within a meadow, and with Adams conditioning placed in a central role.
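The core arithmetic behind likelihood ratio transfer can be sketched as a Bayesian update in odds form (a minimal illustration with hypothetical names; the paper's meadow-based arithmetic and multi-agent communication protocol are not modelled here):

```python
# Hypothetical sketch: the trier of fact (TOF) revises its belief in a
# hypothesis H after the forensic expert (FE) reports a likelihood ratio
# LR = P(E|H) / P(E|not H) and confirms that evidence proposition E holds.

def lr_update(prior_h: float, likelihood_ratio: float) -> float:
    """Posterior probability of H via Bayes' theorem in odds form:
    posterior odds = LR * prior odds."""
    prior_odds = prior_h / (1.0 - prior_h)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# TOF starts with P(H) = 0.2; FE reports LR = 10 and confirms E.
posterior = lr_update(0.2, 10.0)
print(round(posterior, 4))  # → 0.7143
```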

Rethinking Epistemic Logic with Belief Bases

Artificial Intelligence

We introduce a new semantics for a logic of explicit and implicit beliefs based on the concept of a multi-agent belief base. Unlike existing Kripke-style semantics for epistemic logic, in which the notions of possible world and doxastic/epistemic alternative are primitive, in our semantics they are non-primitive and are instead defined from the concept of belief base. We provide a complete axiomatization and prove decidability for our logic via a finite-model argument. We also provide a polynomial embedding of our logic into Fagin & Halpern's logic of general awareness and establish a complexity result for our logic via the embedding.

Self-Guided Belief Propagation -- A Homotopy Continuation Method

Machine Learning

We propose self-guided belief propagation (SBP), which modifies belief propagation (BP) by incorporating the pairwise potentials only gradually. This homotopy continuation method converges to a unique solution and increases accuracy without increasing the computational burden. We apply SBP to grid graphs, complete graphs, and random graphs with random Ising potentials and show that: (i) SBP is superior in terms of accuracy whenever BP converges, and (ii) SBP obtains a unique, stable, and accurate solution whenever BP does not converge. We further provide a formal analysis to demonstrate that SBP obtains the global optimum of the Bethe approximation for attractive models with unidirectional fields.
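The idea of incorporating the pairwise potentials only gradually can be illustrated on a toy Ising model: run sum-product BP with couplings scaled by a parameter t, ramp t from 0 to 1, and reuse the converged messages at each step to initialise the next (an assumed toy example on a 3-node cycle, not the authors' code or graphs):

```python
import math

# Toy Ising cycle: 3 spins, local fields h, attractive couplings J.
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
h = {0: 0.3, 1: -0.2, 2: 0.1}
J = {e: 0.8 for e in edges}
states = [-1, +1]

def run_bp(t, msgs, iters=200):
    """Sum-product BP with pairwise potentials exp(t * J_ij * x_i * x_j)."""
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            for xj in states:
                s = 0.0
                for xi in states:
                    prod = math.exp(h[i] * xi)
                    prod *= math.exp(t * J[tuple(sorted((i, j)))] * xi * xj)
                    for (k, i2) in msgs:          # incoming messages to i, except from j
                        if i2 == i and k != j:
                            prod *= msgs[(k, i)][xi]
                    s += prod
                new.setdefault((i, j), {})[xj] = s
            z = sum(new[(i, j)].values())          # normalise each message
            for xj in states:
                new[(i, j)][xj] /= z
        msgs = new
    return msgs

# Directed messages on each edge, initialised uniform.
msgs = {(i, j): {x: 0.5 for x in states}
        for (a, b) in edges for (i, j) in [(a, b), (b, a)]}

# Homotopy continuation: switch the pairwise potentials on gradually,
# warm-starting each step from the previous converged messages.
for step in range(11):
    msgs = run_bp(step / 10.0, msgs)

def belief(i):
    b = {x: math.exp(h[i] * x) for x in states}
    for (k, i2) in msgs:
        if i2 == i:
            for x in states:
                b[x] *= msgs[(k, i)][x]
    z = sum(b.values())
    return {x: b[x] / z for x in states}

print(belief(0))
```

On this tiny attractive model plain BP already behaves well; the continuation scheme matters on the loopy, strongly coupled instances studied in the paper.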

Propositional Belief Merging with OWA Operators

AAAI Conferences

An Ordered Weighted Averaging (OWA) operator provides a parameterized family of aggregation operators that includes many well-known operators, such as the maximum, the minimum, and the mean. We introduce OWA operators as propositional belief merging operators and investigate their logical properties, as well as their relation to IC and pre-IC merging operators.
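The OWA family is easy to state concretely: sort the input values in descending order and take a weighted sum with a fixed weight vector; different weight vectors recover the maximum, minimum, and mean (a minimal sketch; in belief merging the aggregated values would typically be distances from interpretations to the bases):

```python
def owa(weights, values):
    """Ordered Weighted Averaging: weights apply to the sorted values
    (descending), not to the values' original positions."""
    assert len(weights) == len(values)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

vals = [3.0, 9.0, 6.0]
print(owa([1.0, 0.0, 0.0], vals))       # all weight on the largest: maximum
print(owa([0.0, 0.0, 1.0], vals))       # all weight on the smallest: minimum
print(owa([1/3, 1/3, 1/3], vals))       # uniform weights: arithmetic mean
```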

New Inference Relations from Maximal Consistent Subsets

AAAI Conferences

Given an inconsistent, flat belief base, we show how to draw non-trivial conclusions from it by selecting some of its maximal consistent subsets. This selection leads to inference relations with stronger inferential power than the one based on all maximal consistent subsets, while preserving the property that they are preferential relations (in the sense of KLM).
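A maximal consistent subset (MCS) of a belief base is a consistent subset that cannot be extended with any further formula from the base without becoming inconsistent. A brute-force sketch for a tiny base (illustrative only; formulas are encoded as Python predicates over truth assignments, and the selection strategies of the paper are not modelled):

```python
from itertools import combinations, product

VARS = ("p", "q")
base = [
    ("p",      lambda a: a["p"]),
    ("not p",  lambda a: not a["p"]),
    ("p -> q", lambda a: (not a["p"]) or a["q"]),
]

def consistent(subset):
    """A subset is consistent iff some truth assignment satisfies all of it."""
    return any(all(f(dict(zip(VARS, bits))) for _, f in subset)
               for bits in product([True, False], repeat=len(VARS)))

def maximal_consistent_subsets(base):
    mcs = []
    # Scan subsets from largest to smallest, skipping any subset of a found MCS.
    for r in range(len(base), 0, -1):
        for sub in combinations(base, r):
            if consistent(sub) and not any(set(sub) <= set(m) for m in mcs):
                mcs.append(sub)
    return mcs

for m in maximal_consistent_subsets(base):
    print([name for name, _ in m])
```

For this base the two MCS are {p, p -> q} and {not p, p -> q}; an inference relation then draws conclusions from all, or only a selected subset, of these.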

Two AGM-Style Characterizations of Model Repair

AAAI Conferences

Model repair is the problem of modifying a system model minimally in order to satisfy a desired property. The aim is to find suitable modifications that generate admissible models, representing the intended design for the system. Belief revision is a branch of belief change theory that has recently been used to address the model repair problem. The mechanics of belief adaptation with consistency maintenance make it a suitable theory for addressing model repair. In this work, we propose a set of postulates of rationality with a close correspondence to the classical revision postulates. We show that the proposed set fully characterizes the admissible modifications for model repair. We also propose a second characterization of repair with easy-to-use postulates focused on structural modifications applied to models.

On Belief Promotion

AAAI Conferences

We introduce a new class of belief change operators, named promotion operators. The aim of these operators is to enhance the acceptance of a formula representing a new piece of information. We give postulates for these operators and provide a representation theorem in terms of minimal change. We also show that this class of operators is a very general one, since it captures belief revision, commutative revision, and (essentially) belief contraction as particular cases.

Towards Belief Contraction without Compactness

AAAI Conferences

In the AGM paradigm of belief change the background logic is taken to be a supra-classical logic satisfying compactness, among other properties. Compactness requires that any conclusion drawn from a set of propositions X is implied by a finite subset of X. There are a number of interesting logics, such as Computation Tree Logic (CTL, a temporal logic), which do not possess the compactness property but are important from the belief change point of view. In this paper we explore AGM-style belief contraction in non-compact logics as a starting point, with the expectation that the resulting account will facilitate the development of corresponding accounts of belief revision. We show that, when the background logic does not satisfy compactness, as long as the language in question is closed under classical negation and disjunction, AGM-style belief contraction functions (with appropriate adjustments) can be constructed. We provide such a constructive account of belief contraction that is characterised exactly by the eight AGM postulates of belief contraction. The primary difference between the classical AGM construction of belief contraction functions and the one presented here is that while the former employs remainders of the belief being removed, we use its complements.

Parametrised Difference Revision

AAAI Conferences

Despite the great theoretical advancements in the area of Belief Revision, there has been limited success in terms of implementations. One of the hurdles in implementing revision operators is that their specification (let alone their computation) requires substantial resources. On the other hand, implementing a single specific revision operator, like Dalal's operator, would be of limited use. In a recent paper we generalised Dalal's construction, defining a whole family of concrete revision operators called Parametrised Difference revision operators, or PD operators for short. This family is wide enough to cover a whole range of different applications, and at the same time it is easy to represent. In this paper we characterise the family of PD operators axiomatically, study its computational complexity, and discuss its benefits for belief revision implementations.
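Dalal's operator, the special case that the PD family generalises, is easy to sketch by brute force: the revised belief's models are the models of the new information at minimal Hamming distance from the models of the current belief (an illustrative toy over two atoms; the belief is assumed consistent, and the PD parametrisation itself is not modelled):

```python
from itertools import product

ATOMS = ("a", "b")

def models(formula):
    """All truth assignments (as dicts) satisfying a predicate-encoded formula."""
    return [dict(zip(ATOMS, bits))
            for bits in product([True, False], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, bits)))]

def hamming(m1, m2):
    return sum(m1[x] != m2[x] for x in ATOMS)

def dalal_revise(belief, new_info):
    """Models of new_info closest, in Hamming distance, to a model of belief."""
    kb, mu = models(belief), models(new_info)
    best = min(min(hamming(w, v) for v in kb) for w in mu)
    return [w for w in mu if min(hamming(w, v) for v in kb) == best]

# K = a and b;  mu = not a.  Revision flips a but keeps b.
result = dalal_revise(lambda m: m["a"] and m["b"], lambda m: not m["a"])
print(result)  # → [{'a': False, 'b': True}]
```

A PD operator would replace the flat Hamming count with a parametrised comparison of the differing atoms, yielding a whole family of such operators from one construction.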

Specifying Plausibility Levels for Iterated Belief Change in the Situation Calculus

AAAI Conferences

We investigate augmenting a theory of belief and actions with qualitative plausibility levels. Shapiro et al. created a framework for modeling iterated belief revision and update which integrated those features with the well-developed theory of action in the situation calculus. However, applying their technique requires associating plausibility levels with initial situations, for which no very convenient mechanism had been proposed. Schwering and Lakemeyer proposed deriving these initial plausibility levels from a set of conditionals, similarly to how models are ranked in Pearl's System Z. However, their approach inherits some limitations of System Z. We consider alternatives, and argue that a perspicuous approach is to measure plausibility by counting the abnormalities in a situation (similarly to cardinality-based circumscription). By allowing abnormalities to change over time, we can also model changing plausibility levels in a natural and simple way, which gives us a flexible approach for handling belief change about predicted and unpredicted exogenous actions.