Belief Revision


$\alpha$ Belief Propagation as Fully Factorized Approximation

arXiv.org Machine Learning

Belief propagation (BP) performs exact inference in loop-free graphs, but its performance can be poor in graphs with loops, and our understanding of its solutions there is limited. This work gives an interpretable belief propagation rule that is in fact the minimization of a localized $\alpha$-divergence. We term this algorithm $\alpha$ belief propagation ($\alpha$-BP). The performance of $\alpha$-BP is tested on MAP (maximum a posteriori) inference problems, where $\alpha$-BP can outperform (loopy) BP by a significant margin even in fully connected graphs.
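For orientation, one standard parameterization of the $\alpha$-divergence that such a localized minimization can target is Amari's (an assumption about notation, since the abstract does not fix one):

\[
D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)}\Big(1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx\Big),
\]

which recovers $\mathrm{KL}(p\,\|\,q)$ as $\alpha \to 1$ and $\mathrm{KL}(q\,\|\,p)$ as $\alpha \to 0$. In the message-passing literature, these two KL directions correspond roughly to BP/EP-like and mean-field-like local approximations, which is what makes a tunable $\alpha$ attractive.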


Learning Probabilities: Towards a Logic of Statistical Learning

arXiv.org Artificial Intelligence

We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of 'radical uncertainty' represents the agent's knowledge as a set of probability measures, i.e., as imprecise probabilities. We add to this model a plausibility map, associating with each measure a plausibility number, as a way to go beyond what is known with certainty and represent the agent's beliefs about probability. Standard examples of such maps include Shannon entropy and centre of mass. We then consider the learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g., picking marbles from the bag); and (2) learning higher-order information about the distribution, in the form of linear inequalities (e.g., we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a 'plausibilistic' version of Bayes' Rule) but leaves the given set of measures unchanged; the second shrinks the set of measures without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with the standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning.
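A minimal sketch of the two learning types on the marble-bag example, assuming the 'plausibilistic' Bayes rule simply reweights each measure's plausibility by the likelihood it assigns to the sample (the function names and the proportional update are illustrative assumptions, not the paper's definitions):

from fractions import Fraction

# Hypotheses: possible bag compositions, identified by P(red).
# The plausibility map assigns a weight to each measure.
measures = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
plausibility = {m: Fraction(1) for m in measures}  # uniform to start

def sample_update(plausibility, observation):
    # Type-(1) learning: repeated sampling. Only the plausibility map
    # changes (a 'plausibilistic' Bayes rule); the set of measures stays.
    def likelihood(m):
        return m if observation == "red" else 1 - m
    return {m: pl * likelihood(m) for m, pl in plausibility.items()}

def constraint_update(measures, plausibility, constraint):
    # Type-(2) learning: higher-order information (a linear inequality).
    # The set of measures shrinks; plausibilities are untouched.
    kept = [m for m in measures if constraint(m)]
    return kept, {m: plausibility[m] for m in kept}

for obs in ["red", "red", "blue"]:
    plausibility = sample_update(plausibility, obs)

# Told: more red than non-red marbles, i.e. P(red) > 1/2.
measures, plausibility = constraint_update(measures, plausibility,
                                           lambda m: m > Fraction(1, 2))

# Belief = what holds in the most plausible remaining measures.
best = max(plausibility.values())
print([m for m in measures if plausibility[m] == best])  # [Fraction(3, 4)]

Note that sampling never removes a measure, while the higher-order constraint removes measures without touching their plausibilities, matching the division of labour described above.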


A Conceptually Well-Founded Characterization of Iterated Admissibility Using an "All I Know" Operator

arXiv.org Artificial Intelligence

Brandenburger, Friedenberg, and Keisler provide an epistemic characterization of iterated admissibility (IA), also known as iterated deletion of weakly dominated strategies, where uncertainty is represented using LPSs (lexicographic probability sequences). Their characterization holds in a rich structure called a complete structure, where all types are possible. In earlier work, we gave a characterization of iterated admissibility using an "all I know" operator that captures the intuition that "all the agent knows" is that agents satisfy the appropriate rationality assumptions. That characterization did not need complete structures and used probability structures, not LPSs. However, it did not deal with Samuelson's conceptual concern regarding IA, namely, that at higher levels players do not consider possible the strategies that were used to justify their choice of strategy at lower levels. In this paper, we give a characterization of IA using the all-I-know operator that does deal with Samuelson's concern; however, it uses LPSs. We then show how to modify the characterization using notions of "approximate belief" and "approximately all I know" so as to deal with Samuelson's concern while still working with probability structures.
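For readers unfamiliar with IA itself, here is a minimal sketch of iterated deletion of weakly dominated strategies for a finite two-player game (the simultaneous, maximal-deletion order is one standard convention; none of the epistemic machinery, LPSs or the all-I-know operator, is modelled):

import itertools

def weakly_dominated(payoff, own, opp):
    # Strategies in `own` weakly dominated by some other strategy in
    # `own`, given the opponent's surviving strategies `opp`.
    # payoff[s][t] = this player's payoff for s against opponent's t.
    dominated = set()
    for s, s2 in itertools.permutations(own, 2):
        if all(payoff[s2][t] >= payoff[s][t] for t in opp) and \
           any(payoff[s2][t] > payoff[s][t] for t in opp):
            dominated.add(s)
    return dominated

def iterated_admissibility(rows, cols, pay1, pay2):
    # Simultaneously delete all weakly dominated strategies of both
    # players; repeat until nothing more can be deleted.
    while True:
        d1 = weakly_dominated(pay1, rows, cols)
        d2 = weakly_dominated(pay2, cols, rows)
        if not d1 and not d2:
            return rows, cols
        rows = [r for r in rows if r not in d1]
        cols = [c for c in cols if c not in d2]

# Toy game: U weakly dominates D for player 1, L weakly dominates R for 2.
pay1 = {"U": {"L": 1, "R": 1}, "D": {"L": 1, "R": 0}}
pay2 = {"L": {"U": 1, "D": 1}, "R": {"U": 0, "D": 1}}
print(iterated_admissibility(["U", "D"], ["L", "R"], pay1, pay2))
# -> (['U'], ['L'])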


Elementary Iterated Revision and the Levi Identity

arXiv.org Artificial Intelligence

Recent work has considered the problem of extending to the case of iterated belief change the so-called `Harper Identity' (HI), which defines single-shot contraction in terms of single-shot revision. The present paper considers the prospects of providing a similar extension of the Levi Identity (LI), in which the direction of definition runs the other way. We restrict our attention to the three classic iterated revision operators (natural, restrained and lexicographic), for which we provide the first collective characterisation in the literature, under the appellation of `elementary' operators. We consider two prima facie plausible ways of extending (LI). The first proposal involves the use of the rational closure operator to offer a `reductive' account of iterated revision in terms of iterated contraction. The second, which does not commit to reductionism, was put forward some years ago by Nayak et al. We establish that, for elementary revision operators and under mild assumptions regarding contraction, Nayak's proposal is equivalent to a new set of postulates formalising the claim that contraction by $\neg A$ should be considered a kind of `mild' revision by $A$. We then show that these postulates, in turn, under slightly weaker assumptions, jointly amount to the conjunction of a pair of constraints on the extension of (HI) that were recently proposed in the literature. Finally, we consider the consequences of endorsing both suggestions and show that this would yield an identification of rational revision with natural revision. We close the paper by discussing the general prospects for defining iterated revision in terms of iterated contraction.
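For reference, in the single-shot AGM setting the two identities run in opposite directions (with $*$ revision, $\div$ contraction and $+$ expansion; notation varies across the literature):

\[
\text{(LI)}\quad K * A = (K \div \neg A) + A,
\qquad\qquad
\text{(HI)}\quad K \div A = K \cap (K * \neg A).
\]

The question addressed here is how to lift (LI), which defines revision from contraction, to the iterated setting, mirroring the recent lifting of (HI).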


Exploring the Role of Prior Beliefs for Argument Persuasion

arXiv.org Artificial Intelligence

Public debate forums provide a common platform for exchanging opinions on a topic of interest. While recent studies in natural language processing (NLP) have provided empirical evidence that the language of the debaters and their patterns of interaction play a key role in changing the mind of a reader, research in psychology has shown that prior beliefs can affect our interpretation of an argument and could therefore constitute a competing explanation for resistance to changing one's stance. To study the actual effect of language use versus prior beliefs on persuasion, we provide a new dataset and propose a controlled setting that takes into consideration two reader-level factors: political and religious ideology. We find that prior beliefs associated with these reader-level factors play a more important role than language use effects, and we argue that it is important to account for them in NLP studies of persuasion.


Accuracy-Memory Tradeoffs and Phase Transitions in Belief Propagation

arXiv.org Machine Learning

The analysis of Belief Propagation and other algorithms for the {\em reconstruction problem} plays a key role in the analysis of community detection in inference on graphs, phylogenetic reconstruction in bioinformatics, and the cavity method in statistical physics. We prove a conjecture of Evans, Kenyon, Peres, and Schulman (2000) which states that any bounded-memory message-passing algorithm is statistically much weaker than Belief Propagation for the reconstruction problem. More formally, any recursive algorithm with bounded memory for the reconstruction problem on trees with the binary symmetric channel has a phase transition strictly below the Belief Propagation threshold, also known as the Kesten-Stigum bound. The proof combines in a novel fashion tools from recursive reconstruction, information theory, and optimal transport, and also establishes an asymptotic normality result for BP and other message-passing algorithms near the critical threshold.
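For context, in the canonical setting of broadcasting on the infinite $d$-ary tree through a binary symmetric channel with flip probability $\varepsilon$, the Kesten-Stigum bound is

\[
d\,\theta^2 = 1, \qquad \theta = 1 - 2\varepsilon,
\]

so BP (and reconstruction) succeeds whenever $d(1-2\varepsilon)^2 > 1$; the theorem says that bounded-memory recursive algorithms already fail strictly inside this regime.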


Decrement Operators in Belief Change

arXiv.org Artificial Intelligence

While research on iterated revision is predominant in the field of iterated belief change, the class of iterated contraction operators has received more attention in recent years. In this article, we examine a non-prioritized generalisation of iterated contraction. In particular, we introduce the class of weak decrement operators: operators that achieve, over multiple steps, the same effect as a contraction. Inspired by Darwiche and Pearl's work on iterated revision, we then define the subclass of decrement operators. For both decrement and weak decrement operators, postulates are presented, and for each a representation theorem in the framework of total preorders is given. Furthermore, we present two types of decrement operators which have a unique representative.


Markov versus quantum dynamic models of belief change during evidence monitoring

arXiv.org Artificial Intelligence

Two different dynamic models for belief change during evidence monitoring were evaluated: Markov and quantum. They were empirically tested with an experiment in which participants monitored evidence for an initial period of time, made a probability rating, then monitored more evidence, before making a second rating. The models were qualitatively tested by manipulating the time intervals in a manner that provided a test for interference effects of the first rating on the second. The Markov model predicted no interference whereas the quantum model predicted interference. A quantitative comparison of the two models was also carried out using a generalization criterion method: the parameters were fit to data from one set of time intervals, and then these same parameters were used to predict data from another set of time intervals. The results indicated that some features of both Markov and quantum models are needed to accurately account for the results.
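A toy simulation of the qualitative difference, under illustrative assumptions (a small finite set of belief levels, a randomly generated stochastic/unitary pair, and the intermediate rating modelled as an exact measurement of the level, which is cruder than the projectors used in the actual models):

import numpy as np

rng = np.random.default_rng(0)
n = 4  # discrete belief levels

# Markov: probability vector pushed through a row-stochastic matrix.
T = rng.random((n, n))
T /= T.sum(axis=1, keepdims=True)
p0 = np.full(n, 1.0 / n)

p_unmeasured = p0 @ T @ T                       # two intervals, no rating
p_mid = p0 @ T                                  # rating read out mid-way
p_measured = sum(p_mid[i] * (np.eye(n)[i] @ T) for i in range(n))
print(np.allclose(p_unmeasured, p_measured))    # True: no interference

# Quantum: amplitude vector pushed through a unitary matrix.
A = rng.random((n, n)) + 1j * rng.random((n, n))
U, _ = np.linalg.qr(A)                          # a random unitary
psi0 = np.full(n, 1.0 + 0j) / np.sqrt(n)

q_unmeasured = np.abs(U @ (U @ psi0)) ** 2      # measure only at the end
probs_mid = np.abs(U @ psi0) ** 2               # mid-way rating collapses
q_measured = sum(probs_mid[i] * np.abs(U @ np.eye(n)[i]) ** 2
                 for i in range(n))
print(np.allclose(q_unmeasured, q_measured))    # False: interference

The Markov marginal is unchanged by an intermediate readout (law of total probability), whereas collapsing the quantum superposition removes the off-diagonal interference terms; this is the signature the experiment manipulates the time intervals to detect.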


Penalty Logic-Based Representation of C-Revision

AAAI Conferences

Belief revision consists in modifying an epistemic state in the light of new information. In this paper, we focus on the multiple iterated belief revision process known as C-revision. Epistemic states are represented in terms of ordinal conditional functions (OCFs) and penalty knowledge bases. The input is a set of consistent weighted formulas. We show that C-revision, defined at a semantic level using OCFs, has a very natural counterpart in penalty logic.
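To make the correspondence concrete: a penalty knowledge base $\{(\varphi_i, \alpha_i)\}_i$ ranks each world by the summed weights of the formulas it violates, which has exactly the shape of an OCF (a minimal reading of the penalty-logic semantics, not the paper's full construction):

\[
\kappa(\omega) = \sum_{i \,:\, \omega \not\models \varphi_i} \alpha_i,
\]

normalized if necessary so that $\min_\omega \kappa(\omega) = 0$.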


Axiomatic Evaluation of Epistemic Forgetting Operators

AAAI Conferences

Forgetting as a knowledge management operation has received much less attention than operations like inference or revision. It was mainly in the area of logic programming that techniques and axiomatic properties of forgetting have been studied systematically. However, at least from a cognitive view, forgetting plays an important role in restructuring and reorganizing a human's mind, and it is closely related to notions like relevance and independence which are crucial to knowledge representation and reasoning. In this paper, we propose axiomatic properties of (intentional) forgetting for general epistemic frameworks, inspired by those for logic programming, and we evaluate against them various forgetting operations that have been proposed recently by Beierle et al. The general aim of this paper is to advance formal studies of (intentional) forgetting operators while capturing the many facets of forgetting in a unifying framework in which different forgetting operators can be contrasted and distinguished by means of formal properties.
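As a baseline intuition only (the paper's epistemic forgetting operators act on richer epistemic states), classical propositional forgetting of an atom can be sketched as existential quantification, i.e. $\mathrm{forget}(\varphi, p) \equiv \varphi[p/\top] \vee \varphi[p/\bot]$:

from itertools import product

def models(formula, atoms):
    # All assignments over `atoms` (as dicts) that satisfy `formula`,
    # where `formula` is a Python predicate over an assignment.
    return [w for vals in product([False, True], repeat=len(atoms))
            for w in [dict(zip(atoms, vals))] if formula(w)]

def forget(formula, atom):
    # Classical variable forgetting: forget(f, p) = f[p/True] or f[p/False].
    # The result no longer depends on `atom`.
    return lambda w: formula({**w, atom: True}) or formula({**w, atom: False})

atoms = ["rain", "umbrella"]
phi = lambda w: (not w["rain"]) or w["umbrella"]     # rain -> umbrella
print(len(models(phi, atoms)))                       # 3 models
print(len(models(forget(phi, "umbrella"), atoms)))   # 4: now a tautology

Forgetting weakens the theory by erasing exactly the information carried by the forgotten atom; pinning down this kind of behaviour, and its relation to relevance and independence, is what axiomatic properties of forgetting aim to do.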