Belief Revision: Instructional Materials


A Conditional Perspective on the Logic of Iterated Belief Contraction

arXiv.org Artificial Intelligence

In this article, we consider iteration principles for contraction, with the goal of identifying properties of contractions that respect conditional beliefs. To this end, we investigate and evaluate four groups of iteration principles for contraction that take the dynamics of conditional beliefs into account. For all of these principles, we provide semantic characterization theorems and, wherever possible, formulations by postulates that highlight how the change of beliefs and of conditional beliefs is constrained. The first group is similar to the syntactic Darwiche-Pearl postulates. As a second group, we consider semantic postulates for iterated contraction by Chopra, Ghose, Meyer and Wong, and by Konieczny and Pino Pérez, respectively, and we provide novel syntactic counterparts for them. Third, we propose a contraction analogue of the independence condition by Jin and Thielscher. For the fourth group, we consider natural and moderate contraction by Nayak. Methodologically, we make use of conditionals for contractions, so-called contractionals, and we propose and employ the novel notion of $\alpha$-equivalence for formulating some of the new postulates.
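For orientation, here is a minimal sketch of the classic Darwiche-Pearl postulates for iterated revision of an epistemic state $\Psi$ by an operator $*$; the first group of principles mirrors this style of constraint on the contraction side, and the contraction analogues themselves are the article's contribution, so they are not reproduced here.

    (C1) if $\alpha \models \mu$, then $(\Psi * \mu) * \alpha \equiv \Psi * \alpha$
    (C2) if $\alpha \models \neg\mu$, then $(\Psi * \mu) * \alpha \equiv \Psi * \alpha$
    (C3) if $\Psi * \alpha \models \mu$, then $(\Psi * \mu) * \alpha \models \mu$
    (C4) if $\Psi * \alpha \not\models \neg\mu$, then $(\Psi * \mu) * \alpha \not\models \neg\mu$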


Binary Diffing as a Network Alignment Problem via Belief Propagation

arXiv.org Artificial Intelligence

In this paper, we address the problem of finding a correspondence, or matching, between the functions of two programs in binary form, which is one of the most common tasks in binary diffing. We introduce a new formulation of this problem as a particular instance of a graph edit problem over the call graphs of the programs. In this formulation, the quality of a mapping is evaluated simultaneously with respect to both function content and call graph similarities. We show that this formulation is equivalent to a network alignment problem. We propose a solving strategy for this problem based on max-product belief propagation. Finally, we implement a prototype of our method, called QBinDiff, and report an extensive evaluation which shows that our approach outperforms state-of-the-art diffing tools.
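As a minimal, self-contained sketch of the message-passing machinery involved, the following implements max-product belief propagation (Viterbi-style) on a small chain-structured pairwise model; this illustrates max-product inference in general, not QBinDiff's actual alignment formulation, and all names below are illustrative.

    import numpy as np

    # Toy chain model: x0 - x1 - x2, each variable takes K states.
    # unary[i][s]      : local score of variable i in state s
    # pair[(i, j)][s, t]: compatibility of states (s, t) on edge (i, j)
    K = 3
    rng = np.random.default_rng(0)
    unary = [rng.random(K) for _ in range(3)]
    pair = {(0, 1): rng.random((K, K)), (1, 2): rng.random((K, K))}

    def max_product_chain(unary, pair):
        """MAP assignment on a chain via forward max-product + backtracking."""
        n = len(unary)
        msg = [np.zeros(K) for _ in range(n)]   # msg[i]: best log-score up to var i
        back = [np.zeros(K, dtype=int) for _ in range(n)]
        msg[0] = np.log(unary[0])
        for i in range(1, n):
            # score[s, t] = best path ending with (x_{i-1} = s, x_i = t)
            score = (msg[i - 1][:, None]
                     + np.log(pair[(i - 1, i)])
                     + np.log(unary[i])[None, :])
            back[i] = score.argmax(axis=0)
            msg[i] = score.max(axis=0)
        # Backtrack the argmax assignment.
        states = [int(msg[-1].argmax())]
        for i in range(n - 1, 0, -1):
            states.append(int(back[i][states[-1]]))
        return list(reversed(states))

    print(max_product_chain(unary, pair))  # prints the MAP states for the toy model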


Uncertainty measures: The big picture

arXiv.org Artificial Intelligence

Probability theory is far from being the most general mathematical theory of uncertainty. A number of arguments point to its inability to describe second-order ('Knightian') uncertainty. In response, a wide array of theories of uncertainty have been proposed, many of them generalisations of classical probability. As we show here, such frameworks can be organised into clusters sharing a common rationale, exhibit complex links, and are characterised by different levels of generality. Our goal is a critical appraisal of the current landscape in uncertainty theory.
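As a concrete instance of such a generalisation, consider belief functions, one of the frameworks surveyed: a mass assignment $m$ over subsets of a frame $\Theta$ induces lower and upper bounds on the probability of any event $A$,

    $\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B) \;\le\; P(A) \;\le\; \sum_{B \cap A \neq \emptyset} m(B) = \mathrm{Pl}(A)$,

and the width of the interval $[\mathrm{Bel}(A), \mathrm{Pl}(A)]$ expresses exactly the second-order uncertainty that a single probability value cannot.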


Credibility-limited Base Revision: New Classes and Their Characterizations

Journal of Artificial Intelligence Research

In this paper we study a kind of operator, known as credibility-limited base revision, which addresses two of the main issues that have been raised against the AGM model of belief change. Indeed, on the one hand, these operators are defined on belief bases (rather than belief sets) and, on the other hand, they are constructed with the underlying idea that not all new information is accepted. We propose twenty different classes of credibility-limited base revision operators and obtain axiomatic characterizations for each of them. Additionally, we thoroughly investigate the interrelations (in the sense of inclusion) among all those classes. More precisely, we analyse whether each one of those classes is or is not (strictly) contained in each of the remaining ones.
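For orientation, the basic credibility-limited scheme in a common form from the literature (the paper's twenty classes arise from varying the underlying operators and credibility conditions): given a belief base $B$, an underlying revision operator $*$ and a set $C$ of credible formulas,

    $B \circledast \alpha = B * \alpha$ if $\alpha \in C$, and $B \circledast \alpha = B$ otherwise,

so non-credible input is rejected outright rather than incorporated.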


Recursive Experts: An Efficient Optimal Mixture of Learning Systems in Dynamic Environments

arXiv.org Machine Learning

Sequential learning systems are used in a wide variety of problems, from decision making to optimization, where they provide a 'belief' (opinion) to nature and then update this belief based on the feedback (result) to minimize (or maximize) some cost or loss (conversely, utility or gain). The goal is to reach an objective by exploiting the temporal relation inherent in nature's feedback (state). By exploiting this relation, specific learning systems can be designed that perform asymptotically optimally for various applications. However, if the framework of the problem is not stationary, i.e., nature's state sometimes changes arbitrarily, the past cumulative belief revision done by the system may become useless, and the system may fail if it lacks adaptivity. While this adaptivity can be directly implemented in specific cases (e.g., convex optimization), it is mostly not straightforward for general learning tasks. To this end, we propose an efficient optimal mixture framework for general sequential learning systems, which we call the recursive experts for dynamic environments. For this purpose, we design hyper-experts that incorporate the learning systems at our disposal and recursively merge them in a specific way to achieve minimax optimal regret bounds up to constant factors. The multiplicative increase in computational complexity from the initial system to our adaptive system is only a logarithmic-in-time factor.
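As a minimal sketch of the basic merging primitive behind such mixtures, the following implements the classic exponentially weighted (Hedge) combination of experts; the paper's recursive hyper-expert construction, which additionally adapts to changing environments, is not reproduced here, and all names below are illustrative.

    import numpy as np

    def hedge_mixture(loss_matrix, eta=0.5):
        """Exponentially weighted mixture over experts.

        loss_matrix[t, k] is the loss of expert k at round t (in [0, 1]).
        Returns the mixture's per-round expected losses.
        """
        T, K = loss_matrix.shape
        log_w = np.zeros(K)                 # log-weights, uniform prior
        mix_losses = []
        for t in range(T):
            w = np.exp(log_w - log_w.max())
            p = w / w.sum()                 # current mixture distribution
            mix_losses.append(p @ loss_matrix[t])
            log_w -= eta * loss_matrix[t]   # multiplicative update
        return np.array(mix_losses)

    # Toy run: two experts, one good early, the other good late.
    rng = np.random.default_rng(1)
    T = 200
    losses = np.stack([np.r_[rng.random(T // 2) * 0.2, rng.random(T // 2)],
                       np.r_[rng.random(T // 2), rng.random(T // 2) * 0.2]], axis=1)
    print(hedge_mixture(losses).sum())      # cumulative mixture loss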


$\alpha$ Belief Propagation for Approximate Inference

arXiv.org Machine Learning

The belief propagation (BP) algorithm is a widely used message-passing method for inference in graphical models. BP on loop-free graphs converges in linear time, but for graphs with loops its performance is uncertain and the understanding of its solution is limited. To gain a better understanding of BP in general graphs, we derive an interpretable belief propagation algorithm that is motivated by the minimization of a localized $\alpha$-divergence. We term this algorithm $\alpha$ belief propagation ($\alpha$-BP). It turns out that $\alpha$-BP generalizes standard BP. In addition, this work studies the convergence properties of $\alpha$-BP and proves convergence conditions for it. Experimental simulations on random graphs validate our theoretical results. The application of $\alpha$-BP to practical problems is also demonstrated.
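For reference, a common parameterization of the $\alpha$-divergence whose localized minimization motivates the algorithm (the precise localization used in the paper is not reproduced here):

    $D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)} \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right)$,

which recovers $\mathrm{KL}(p \,\|\, q)$ as $\alpha \to 1$ and $\mathrm{KL}(q \,\|\, p)$ as $\alpha \to 0$, so a single parameter interpolates between the divergences underlying different message-passing schemes.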


Goal Recognition over Imperfect Domain Models

arXiv.org Artificial Intelligence

Goal recognition is the problem of recognizing the intended goal of autonomous agents or humans by observing their behavior in an environment. Over the past years, most existing approaches to goal and plan recognition have ignored the need to deal with imperfections in the domain model that formalizes the environment in which autonomous agents behave. In this thesis, we introduce the problem of goal recognition over imperfect domain models, and develop solution approaches that explicitly deal with two distinct types of imperfect domain models: (1) incomplete discrete domain models that have possible, rather than known, preconditions and effects in action descriptions; and (2) approximate continuous domain models, in which the transition function is approximated from past observations and not well-defined. We develop novel goal recognition approaches over imperfect domain models by leveraging and adapting existing recognition approaches from the literature. Experiments and evaluation over these two types of imperfect domain models show that our novel goal recognition approaches are accurate in comparison to baseline approaches from the literature, at several levels of observability and imperfection.
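To make type (1) concrete, here is a hypothetical encoding of an incomplete discrete action, with possible preconditions and effects kept separate from known ones; all names are illustrative and this is not the thesis's actual representation.

    from dataclasses import dataclass

    @dataclass
    class IncompleteAction:
        """STRIPS-style action with known and merely *possible* conditions."""
        name: str
        pre: frozenset = frozenset()        # known preconditions
        poss_pre: frozenset = frozenset()   # possible preconditions
        add: frozenset = frozenset()        # known add effects
        poss_add: frozenset = frozenset()   # possible add effects

    # The agent *may* need the key; a recognizer must hedge over both readings.
    open_door = IncompleteAction(
        name="open-door",
        pre=frozenset({"at-door"}),
        poss_pre=frozenset({"has-key"}),
        add=frozenset({"door-open"}),
    )
    print(open_door)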


Visions of a generalized probability theory

arXiv.org Artificial Intelligence

In this book we argue that the fruitful interaction of computer vision and belief calculus is capable of stimulating significant advances in both fields. From a methodological point of view, novel theoretical results concerning the geometric and algebraic properties of belief functions as mathematical objects are illustrated and discussed in Part II, with a focus on both a 'geometric approach' to uncertainty and an algebraic solution to the issue of conflicting evidence. In Part III we show how these theoretical developments arise from important computer vision problems (such as articulated object tracking, data association and object pose estimation), to which, in turn, the evidential formalism is able to provide interesting new solutions. Finally, some initial steps towards a generalization of the notion of total probability to belief functions are taken, with the aim of endowing the theory of evidence with a complete battery of estimation and inference tools, to the benefit of all scientists and practitioners.
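For context, the standard Dempster rule for combining two mass functions $m_1, m_2$, whose behaviour under conflicting evidence is the subject of the algebraic analysis mentioned above: for $A \neq \emptyset$,

    $m_{1 \oplus 2}(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C)$, where $K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$

measures the conflict between the two bodies of evidence; the rule is undefined when $K = 1$ (total conflict).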


Belief Integration and Source Reliability Assessment

Journal of Artificial Intelligence Research

Merging beliefs requires knowing the plausibility of the sources of the information to be merged. They are typically assumed equally reliable when nothing suggests otherwise. A recent line of research has sprung from the idea of deriving this information from the revision process itself. In particular, the history of previous revisions and previous merging examples provides information for performing subsequent merging operations. Yet, no examples or previous revisions may be available. In spite of the apparent lack of information, something can still be inferred by a try-and-check approach: a relative reliability ordering is assumed, the sources are integrated according to it, and the result is compared with the original information. The final check may contradict the original ordering, for example when the result of merging implies the negation of a formula coming from a source initially assumed reliable, or when it implies a formula coming from a source assumed unreliable. In such cases, the reliability ordering assumed in the first place can be excluded from consideration. Such a scenario is proved real under the classifications of source reliability and the definitions of belief integration considered in this article: sources are divided into two, three or multiple reliability classes; integration is mostly by maximal consistent subsets, but weighted distance is also considered.
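A minimal sketch of the try-and-check idea on propositional literals, using a crude majority merge as a stand-in for merging by maximal consistent subsets; all names and conventions below are illustrative, not the article's formal definitions.

    from itertools import permutations

    # Toy beliefs: each source asserts a set of literals (+n = atom n true, -n = false).
    sources = {"s1": {1}, "s2": {-1, 2}, "s3": {-1}}

    def majority_merge():
        """Reliability-blind merge: keep each literal asserted strictly more
        often than its negation."""
        lits = {l for s in sources.values() for l in s}
        count = lambda l: sum(l in s for s in sources.values())
        return {l for l in lits if count(l) > count(-l)}

    merged = majority_merge()

    # Try-and-check: an assumed ordering is excluded when the merged result
    # implies the negation of a formula from the source ranked most reliable.
    for order in permutations(sources):
        top = order[0]
        contradicted = any(-lit in merged for lit in sources[top])
        print(order, "excluded" if contradicted else "kept")

On this toy input the merge yields {-1, 2}, so every ordering that ranks s1 (which asserted +1) most reliable is excluded, exactly the kind of inference the try-and-check approach draws without any revision history.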


Preference-Based Inconsistency Management in Multi-Context Systems

Journal of Artificial Intelligence Research

Multi-Context Systems (MCS) are a powerful framework for interlinking possibly heterogeneous, autonomous knowledge bases, in which information can be exchanged among knowledge bases via designated bridge rules with negation as failure. An acknowledged issue with MCS is inconsistency arising from this information exchange. To remedy the problem, inconsistency removal has been proposed in terms of repairs, which modify bridge rules based on suitable notions of diagnosis of inconsistency. In general, multiple diagnoses and repairs exist; this leaves the user, who arguably may oversee the inconsistency removal, with the task of selecting some repair among all possible ones. To aid in this regard, we extend the MCS framework with preference information for diagnoses, such that undesired diagnoses are filtered out and the diagnoses that are most preferred according to a preference ordering are selected. We consider preference information at a generic level and develop meta-reasoning techniques on diagnoses in MCS that can be exploited to reduce preference-based selection of diagnoses to computing ordinary subset-minimal diagnoses in an extended MCS. We describe two meta-reasoning encodings for preference orders: the first is conceptually simple but may incur an exponential blowup; the second grows only linearly in size and is based on duplicating the original MCS. The latter requires nondeterministic guessing when a subset-minimal diagnosis among all most preferred ones is to be computed. However, a complexity analysis of diagnoses shows that this is worst-case optimal and that, in general, preferred diagnoses have the same complexity as subset-minimal ordinary diagnoses. Furthermore, (subset-minimal) filtered diagnoses and (subset-minimal) ordinary diagnoses also have the same complexity.
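As an illustration of the basic object being computed, here is a brute-force search for subset-minimal diagnoses in a toy setting; real MCS diagnoses are pairs of bridge-rule sets (rules removed and rules made unconditional), and the consistency check below is a hypothetical stand-in for actually evaluating the contexts.

    from itertools import combinations

    # Hypothetical toy: the system is "consistent" under a set of active
    # bridge rules iff r1 and r2 are not both active.
    bridge_rules = {"r1", "r2", "r3"}

    def consistent(active):
        return not {"r1", "r2"} <= set(active)

    def minimal_diagnoses(rules):
        """Subset-minimal sets of bridge rules whose removal restores
        consistency (only the removal half of MCS diagnoses, for illustration)."""
        diagnoses = []
        for k in range(len(rules) + 1):
            for removed in combinations(sorted(rules), k):
                if consistent(rules - set(removed)):
                    # Keep only subset-minimal candidates.
                    if not any(set(d) <= set(removed) for d in diagnoses):
                        diagnoses.append(removed)
        return diagnoses

    print(minimal_diagnoses(bridge_rules))   # [('r1',), ('r2',)]

Preference-based selection, as described above, would then be layered on top: filter this set of diagnoses and pick the most preferred ones according to the given ordering.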