Belief Revision


Factored Probabilistic Belief Tracking

arXiv.org Artificial Intelligence

The problem of belief tracking in the presence of stochastic actions and observations is pervasive and yet computationally intractable. In this work we show, however, that probabilistic beliefs can be maintained in factored form exactly and efficiently across a number of causally closed beams, when the state variables that appear in more than one beam obey a form of backward determinism. Since computing marginals from the factors is still computationally intractable in general, and variables appearing in several beams are not always backward-deterministic, the basic formulation is extended with two approximations: forms of belief propagation for computing marginals from factors, and sampling of non-backward-deterministic variables for making such variables backward-deterministic given their sampled history. Unlike Rao-Blackwellized particle filtering, the sampling is not used for making inference tractable but for making the factorization sound. The resulting algorithm involves sampling and belief propagation, or just one of them, as determined by the structure of the model.
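
As a rough, hypothetical illustration of the factored representation (a toy filter, not the paper's algorithm; the function names and the dictionary encoding of distributions are assumptions made for this sketch), one can picture each causally closed beam carrying its own distribution that is progressed and conditioned independently:

# A minimal sketch of maintaining a belief in factored form across "beams"
# (causally closed groups of state variables). Illustrative only.
from collections import defaultdict

def update_factor(factor, transition, observation_likelihood, obs):
    """One filtering step on a single beam: progression through the stochastic
    transition, conditioning on the observation, then renormalisation."""
    new = defaultdict(float)
    for state, p in factor.items():
        for nxt, pt in transition(state).items():
            new[nxt] += p * pt * observation_likelihood(nxt, obs)
    z = sum(new.values()) or 1.0
    return {s: p / z for s, p in new.items()}

# The joint belief is represented as a product of per-beam factors. This
# factorisation is exact when variables shared between beams are
# backward-deterministic (their past values are determined by the present);
# otherwise, per the abstract, their history is sampled to restore that property.
def factored_belief_update(factors, transitions, likelihoods, observations):
    return [update_factor(f, t, l, o)
            for f, t, l, o in zip(factors, transitions, likelihoods, observations)]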


Message Scheduling for Performant, Many-Core Belief Propagation

arXiv.org Artificial Intelligence

Belief Propagation (BP) is a message-passing algorithm for approximate inference over Probabilistic Graphical Models (PGMs), with many applications such as computer vision, error-correcting codes, and protein folding. While general, the convergence and speed of the algorithm have limited its practical use on difficult inference problems. As an algorithm that is highly amenable to parallelization, many-core Graphics Processing Units (GPUs) could significantly improve BP performance. Improving BP through many-core systems is nontrivial: the scheduling of messages in the algorithm strongly affects performance. We present a study of message scheduling for BP on GPUs. We demonstrate that BP exhibits a tradeoff between speed and convergence based on parallelism, and show that existing message schedulings are not able to exploit this tradeoff. To this end, we present a novel randomized message scheduling approach, Randomized BP (RnBP), which outperforms existing methods on the GPU. Probabilistic Graphical Models (PGMs) are powerful, general machine learning models that encode distributions over random variables. PGM inference, in which we seek to compute probabilistic beliefs within the system modeled by the PGM, is in general an intractable problem, leading to a dependence on approximate algorithms. Belief Propagation (BP) is a widely employed approximate inference algorithm for PGMs [1].
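
As a rough illustration of the scheduling idea (not the authors' GPU implementation; the graph setup, the shared pairwise potential, and the name random_schedule_bp are assumptions made for this sketch), the following snippet runs loopy sum-product BP on a pairwise MRF but refreshes only a random fraction of the messages per iteration, interpolating between fully synchronous updates (frac=1.0) and sparser, more sequential-looking schedules:

# Randomized message scheduling for loopy sum-product BP on a pairwise MRF.
import numpy as np

def random_schedule_bp(unary, pairwise, edges, frac=0.3, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    k = unary.shape[1]                              # labels per variable
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    msgs = {e: np.ones(k) / k for e in directed}    # messages on directed edges
    pot = {(i, j): pairwise for i, j in edges}      # one shared edge potential
    pot.update({(j, i): pairwise.T for i, j in edges})
    for _ in range(iters):
        chosen = [e for e in directed if rng.random() < frac]   # random subset
        new = {}
        for (i, j) in chosen:
            incoming = np.prod([msgs[(n, t)] for (n, t) in directed
                                if t == i and n != j] or [np.ones(k)], axis=0)
            m = pot[(i, j)].T @ (unary[i] * incoming)           # sum-product update
            new[(i, j)] = m / m.sum()
        msgs.update(new)
    beliefs = []
    for i in range(unary.shape[0]):
        incoming = np.prod([msgs[(n, t)] for (n, t) in directed
                            if t == i] or [np.ones(k)], axis=0)
        b = unary[i] * incoming
        beliefs.append(b / b.sum())
    return np.array(beliefs)

Calling it with unary of shape (n, k), a k-by-k pairwise potential, and an undirected edge list returns approximate marginals; varying frac trades per-iteration parallel work against convergence behaviour, which is the flavour of tradeoff the paper studies.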


Memory Management in Resource-Bounded Agents

arXiv.org Artificial Intelligence

Memory in an agent system is a process of reasoning: it is the learning process of strengthening a concept. The interaction between an agent and the environment can play an important role in constructing its memory and may affect its future behaviour. In fact, through memory an agent is potentially able to recall and to learn from experiences, so that its beliefs and its future course of action are grounded in these experiences. In computational logic, [2] introduces DLEK (Dynamic Logic of Explicit beliefs and Knowledge) as a logical formalization of short-term and long-term memory. The underlying idea is to represent reasoning about the formation of beliefs through perception and inference in non-omniscient, resource-bounded agents. DLEK has, however, no notion of time, while agents' actual perceptions are inherently timed, and so are many of the inferences drawn from such perceptions. In this paper we present an extension of LEK/DLEK to T-LEK/T-DLEK ("Timed LEK" and "Timed DLEK") obtained by introducing a special function which associates to each belief its arrival time and controls timed inferences. Through this function it is easier to keep the evolution of the surrounding world under control, and the representation is more complete. This abstract is an evolved version of [3], where we have introduced explicit time instants and time intervals in formulas, and it is extracted from [4].
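
As a toy illustration of the timestamping idea only (this is not the T-DLEK logic itself; the class names and the max_age rule are invented for this sketch), one can picture each stored belief carrying its arrival time, with inference rules allowed to fire only on sufficiently recent premises:

# Beliefs tagged with arrival times; inferences constrained by those times.
from dataclasses import dataclass

@dataclass(frozen=True)
class TimedBelief:
    formula: str
    time: int          # arrival time of the perception or inference

class TimedMemory:
    def __init__(self):
        self.beliefs = set()

    def perceive(self, formula, time):
        self.beliefs.add(TimedBelief(formula, time))

    def infer(self, premises, conclusion, now, max_age=None):
        """Add `conclusion` at time `now` if every premise is believed and,
        optionally, none of the premises is older than `max_age` ticks."""
        found = {p: [b for b in self.beliefs if b.formula == p] for p in premises}
        recent_enough = (max_age is None or
                         all(now - max(b.time for b in bs) <= max_age
                             for bs in found.values() if bs))
        if all(found.values()) and recent_enough:
            self.beliefs.add(TimedBelief(conclusion, now))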


$\alpha$ Belief Propagation as Fully Factorized Approximation

arXiv.org Machine Learning

Belief propagation (BP) performs exact inference in loop-free graphs, but its performance can be poor in graphs with loops, and the understanding of its solution is limited. This work gives an interpretable belief propagation rule that amounts to minimization of a localized $\alpha$-divergence. We term this algorithm $\alpha$ belief propagation ($\alpha$-BP). The performance of $\alpha$-BP is tested on MAP (maximum a posteriori) inference problems, where $\alpha$-BP can outperform (loopy) BP by a significant margin even in fully connected graphs.
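
For reference, a standard parametrization of the $\alpha$-divergence between two distributions $p$ and $q$ (the family whose localized minimization the rule above is interpreted as) is

$$ D_{\alpha}(p \,\|\, q) \;=\; \frac{1}{\alpha(1-\alpha)} \Big( 1 - \sum_{x} p(x)^{\alpha}\, q(x)^{1-\alpha} \Big), \qquad \lim_{\alpha \to 1} D_{\alpha} = \mathrm{KL}(p \,\|\, q), \quad \lim_{\alpha \to 0} D_{\alpha} = \mathrm{KL}(q \,\|\, p), $$

so the two Kullback-Leibler divergences are recovered at the endpoints of the family.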


Learning Probabilities: Towards a Logic of Statistical Learning

arXiv.org Artificial Intelligence

We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of 'radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating to each measure a plausibility number, as a way to go beyond what is known with certainty and represent the agent's beliefs about probability. There are a number of standard examples: Shannon Entropy, Centre of Mass, etc. We then consider learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the shape of linear inequalities, e.g. we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a 'plausibilistic' version of Bayes' Rule), but leaves the given set of measures unchanged; the second shrinks the set of measures, without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning.
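
A minimal sketch of the two updates described above (the marble hypotheses, function names, and normalisation choice are illustrative assumptions, not the paper's formal definitions): repeated sampling reweights the plausibility of each candidate measure by the likelihood of the observed draw, while higher-order information simply discards measures violating the stated constraint:

def sample_update(candidates, plausibility, outcome):
    """Plausibilistic Bayes: reweight each candidate measure by the probability it
    assigns to the observed sample; the set of candidates itself is unchanged."""
    new = {m: plausibility[m] * candidates[m][outcome] for m in candidates}
    z = sum(new.values()) or 1.0
    return {m: w / z for m, w in new.items()}

def higher_order_update(candidates, plausibility, constraint):
    """Learning e.g. 'more red than green': drop measures violating the constraint,
    leaving the plausibility of the survivors untouched."""
    keep = {m: dist for m, dist in candidates.items() if constraint(dist)}
    return keep, {m: plausibility[m] for m in keep}

# Example: three hypotheses about the fraction of red marbles in the bag.
candidates = {"mostly_red":   {"red": 0.8, "green": 0.2},
              "even":         {"red": 0.5, "green": 0.5},
              "mostly_green": {"red": 0.2, "green": 0.8}}
plausibility = {m: 1.0 for m in candidates}
for draw in ["red", "red", "green", "red"]:
    plausibility = sample_update(candidates, plausibility, draw)
belief = max(plausibility, key=plausibility.get)   # belief = most plausible measure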


A Conceptually Well-Founded Characterization of Iterated Admissibility Using an "All I Know" Operator

arXiv.org Artificial Intelligence

Brandenburger, Friedenberg, and Keisler provide an epistemic characterization of iterated admissibility (IA), also known as iterated deletion of weakly dominated strategies, where uncertainty is represented using LPSs (lexicographic probability systems). Their characterization holds in a rich structure called a complete structure, where all types are possible. In earlier work, we gave a characterization of iterated admissibility using an "all I know" operator that captures the intuition that "all the agent knows" is that agents satisfy the appropriate rationality assumptions. That characterization did not need complete structures and used probability structures, not LPSs. However, that characterization did not deal with Samuelson's conceptual concern regarding IA, namely, that at higher levels, players do not consider possible strategies that were used to justify their choice of strategy at lower levels. In this paper, we give a characterization of IA using the "all I know" operator that does deal with Samuelson's concern. However, it uses LPSs. We then show how to modify the characterization using notions of "approximate belief" and "approximately all I know" so as to deal with Samuelson's concern while still working with probability structures.
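
As a small, simplified illustration of the solution concept being characterized (checking only domination by pure strategies, whereas the notion used in the paper allows mixed strategies; the function names and the simultaneous-deletion convention are assumptions of this sketch):

# Iterated deletion of weakly dominated strategies for a two-player game with
# payoff matrices A (row player) and B (column player).
import numpy as np

def weakly_dominated(payoff, own, other):
    """Indices in `own` weakly dominated, against `other`, by some other pure strategy."""
    out = set()
    for r in own:
        for s in own:
            if s == r:
                continue
            diff = payoff[np.ix_([s], other)] - payoff[np.ix_([r], other)]
            if np.all(diff >= 0) and np.any(diff > 0):
                out.add(r)
                break
    return out

def iterated_admissibility(A, B):
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    while True:
        dr = weakly_dominated(A, rows, cols)
        dc = weakly_dominated(B.T, cols, rows)   # column player's payoffs, transposed
        if not dr and not dc:
            return rows, cols                    # surviving strategy indices
        rows = [r for r in rows if r not in dr]
        cols = [c for c in cols if c not in dc]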


Elementary Iterated Revision and the Levi Identity

arXiv.org Artificial Intelligence

Recent work has considered the problem of extending to the case of iterated belief change the so-called `Harper Identity' (HI), which defines single-shot contraction in terms of single-shot revision. The present paper considers the prospects of providing a similar extension of the Levi Identity (LI), in which the direction of definition runs the other way. We restrict our attention to the three classic iterated revision operators (natural, restrained and lexicographic), for which we provide here the first collective characterisation in the literature, under the appellation of `elementary' operators. We consider two prima facie plausible ways of extending (LI). The first proposal involves the use of the rational closure operator to offer a `reductive' account of iterated revision in terms of iterated contraction. The second, which doesn't commit to reductionism, was put forward some years ago by Nayak et al. We establish that, for elementary revision operators and under mild assumptions regarding contraction, Nayak's proposal is equivalent to a new set of postulates formalising the claim that contraction by $\neg A$ should be considered to be a kind of `mild' revision by $A$. We then show that these, in turn, under slightly weaker assumptions, jointly amount to the conjunction of a pair of constraints on the extension of (HI) that were recently proposed in the literature. Finally, we consider the consequences of endorsing both suggestions and show that this would yield an identification of rational revision with natural revision. We close the paper by discussing the general prospects for defining iterated revision in terms of iterated contraction.
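
For reference, the two single-shot identities at issue, in the usual AGM notation with $\ast$ for revision, $\div$ for contraction and $+$ for expansion, are

$$ \text{(LI)}\quad K \ast A \;=\; (K \div \neg A) + A, \qquad\qquad \text{(HI)}\quad K \div A \;=\; K \cap (K \ast \neg A). $$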


Exploring the Role of Prior Beliefs for Argument Persuasion

arXiv.org Artificial Intelligence

Public debate forums provide a common platform for exchanging opinions on a topic of interest. While recent studies in natural language processing (NLP) have provided empirical evidence that the language of the debaters and their patterns of interaction play a key role in changing the mind of a reader, research in psychology has shown that prior beliefs can affect our interpretation of an argument and could therefore constitute a competing alternative explanation for resistance to changing one's stance. To study the actual effect of language use vs. prior beliefs on persuasion, we provide a new dataset and propose a controlled setting that takes into consideration two reader-level factors: political and religious ideology. We find that prior beliefs affected by these reader-level factors play a more important role than language use effects and argue that it is important to account for them in NLP studies of persuasion.


Accuracy-Memory Tradeoffs and Phase Transitions in Belief Propagation

arXiv.org Machine Learning

The analysis of Belief Propagation and other algorithms for the {\em reconstruction problem} plays a key role in the analysis of community detection in inference on graphs, phylogenetic reconstruction in bioinformatics, and the cavity method in statistical physics. We prove a conjecture of Evans, Kenyon, Peres, and Schulman (2000) which states that any bounded-memory message-passing algorithm is statistically much weaker than Belief Propagation for the reconstruction problem. More formally, any recursive algorithm with bounded memory for the reconstruction problem on trees with the binary symmetric channel has a phase transition strictly below the Belief Propagation threshold, also known as the Kesten-Stigum bound. The proof combines, in a novel fashion, tools from recursive reconstruction, information theory, and optimal transport, and also establishes an asymptotic normality result for BP and other message-passing algorithms near the critical threshold.
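
For context, for broadcasting on the $d$-ary tree through a binary symmetric channel with flip probability $\varepsilon$ (so the channel's second eigenvalue is $\theta = 1 - 2\varepsilon$), the Kesten-Stigum bound referred to above is usually stated as

$$ d\,\theta^{2} \;=\; d\,(1 - 2\varepsilon)^{2} \;>\; 1, $$

above which belief propagation achieves non-trivial reconstruction.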


Decrement Operators in Belief Change

arXiv.org Artificial Intelligence

While research on iterated revision is predominant in the field of iterated belief change, the class of iterated contraction operators has received more attention in recent years. In this article, we examine a non-prioritized generalisation of iterated contraction. In particular, we introduce the class of weak decrement operators, which achieve over multiple steps the same result as a single contraction. Inspired by Darwiche and Pearl's work on iterated revision, the subclass of decrement operators is defined. For both decrement and weak decrement operators, postulates are presented, and for each a representation theorem in the framework of total preorders is given. Furthermore, we present two types of decrement operators which have a unique representative.
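
As a toy sketch of the total-preorder (ranking) setting in which such representation theorems are stated (this shows one concrete single-step contraction in the style of natural contraction, not the decrement operators themselves, which reach the same effect over several weaker steps; all names are illustrative):

# A belief state as a ranking of worlds; beliefs are the propositions true in all
# minimal worlds. Contraction by A promotes the best counter-worlds of A to the
# minimal layer so that A is no longer believed.
def minimal(worlds, rank, prop=None):
    pool = [w for w in worlds if prop is None or prop(w)]
    best = min(rank[w] for w in pool)
    return {w for w in pool if rank[w] == best}

def believes(worlds, rank, prop):
    return all(prop(w) for w in minimal(worlds, rank))

def contract(worlds, rank, prop):
    """Return a new ranking in which `prop` is no longer believed."""
    if not believes(worlds, rank, prop):
        return dict(rank)                                     # vacuity: nothing to do
    new = dict(rank)
    for w in minimal(worlds, rank, lambda v: not prop(v)):    # best counter-worlds
        new[w] = 0                                            # join the minimal layer
    return new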