Belief Revision


Belief Propagation and Loop Series on Planar Graphs

arXiv.org Artificial Intelligence

We discuss a generic model of Bayesian inference with binary variables defined on the edges of a planar graph. The Loop Calculus approach of [1, 2] is used to evaluate the resulting series expansion for the partition function. We show that, for planar graphs, truncating the series at single-connected loops reduces, via a map reminiscent of the Fisher transformation [3], to evaluating the partition function of the dimer-matching model on an auxiliary planar graph. Thus, the truncated series can easily be re-summed using the Pfaffian formula of Kasteleyn [4]. This allows us to identify a large class of computationally tractable planar models that are reducible to a dimer model via the Belief Propagation (gauge) transformation. The Pfaffian representation can also be extended to the full Loop Series, in which case the expansion becomes a sum of Pfaffian contributions, each associated with dimer matchings on an extension of a subgraph of the original graph. Algorithmic consequences of the Pfaffian representation, as well as relations to quantum and non-planar models, are discussed.
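
As a concrete illustration of the Kasteleyn machinery invoked above, the following Python sketch counts the dimer coverings (perfect matchings) of a small planar graph from a Kasteleyn-oriented adjacency matrix. The 4-cycle and its orientation are hypothetical toy inputs, not taken from the paper.

```python
# Minimal sketch: Kasteleyn's Pfaffian formula on a toy planar graph.
import numpy as np

n = 4
# A Kasteleyn orientation of the 4-cycle 0-1-2-3: each bounded face
# has an odd number of clockwise-oriented edges (here 3 of 4).
oriented_edges = [(0, 1), (1, 2), (2, 3), (0, 3)]

# Skew-symmetric Kasteleyn matrix: K[i, j] = +1 if i -> j, -1 if j -> i.
K = np.zeros((n, n))
for i, j in oriented_edges:
    K[i, j] = 1.0
    K[j, i] = -1.0

# Kasteleyn's theorem: #perfect matchings = |Pf(K)| = sqrt(det(K)).
print(round(np.sqrt(np.linalg.det(K))))  # -> 2, the two dimer coverings
```

For a weighted dimer model, replacing the +/-1 entries with +/-w_e makes sqrt(det(K)) the dimer partition function, which is the re-summation the abstract refers to.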


Using Combinatorial Optimization within Max-Product Belief Propagation

Neural Information Processing Systems

In general, the problem of computing a maximum a posteriori (MAP) assignment in a Markov random field (MRF) is computationally intractable. However, in certain subclasses of MRFs, an optimal or close-to-optimal assignment can be found very efficiently using combinatorial optimization algorithms: certain MRFs with mutual exclusion constraints can be solved using bipartite matching, and MRFs with regular potentials can be solved using minimum-cut methods. However, these solutions do not apply to the many MRFs that contain such tractable components as sub-networks but also include other, non-complying potentials.
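
For orientation, here is a minimal Python sketch of the standard max-product update that such hybrid schemes build on; the chain graph and potentials are hypothetical toy inputs, and the paper's combinatorial-optimization subroutines are not reproduced.

```python
# Minimal sketch: max-product message passing on a toy chain MRF x0 - x1 - x2.
import numpy as np

unary = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
pairwise = np.array([[1.0, 0.5],
                     [0.5, 1.0]])       # psi(x_i, x_j), shared by both edges
neighbors = {0: [1], 1: [0, 2], 2: [1]}

# m[(i, j)][x_j] = max_{x_i} unary_i(x_i) * psi(x_i, x_j) * other incoming msgs.
msgs = {(i, j): np.ones(2) for i in neighbors for j in neighbors[i]}
for _ in range(5):                      # exact on a chain once converged
    new = {}
    for (i, j) in msgs:
        prod = unary[i].copy()
        for k in neighbors[i]:
            if k != j:
                prod = prod * msgs[(k, i)]
        new[(i, j)] = np.max(pairwise * prod[:, None], axis=0)
    msgs = new

# Node max-marginals; their argmaxes give the MAP assignment on trees.
for i in neighbors:
    belief = unary[i] * np.prod([msgs[(k, i)] for k in neighbors[i]], axis=0)
    print(i, np.argmax(belief))         # -> MAP assignment (1, 1, 1) here
```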


A Spectral Approach to Analyzing Belief Propagation for 3-Coloring

arXiv.org Artificial Intelligence

Contributing to the rigorous understanding of Belief Propagation (BP), in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a "planted" solution; thus, we obtain the first rigorous result on BP for graph coloring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how Belief Propagation breaks the symmetry between the 3! possible permutations of the color classes.
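
A minimal Python sketch of the BP update for proper 3-colorings may help fix ideas; the toy graph and the random initialization are illustrative assumptions, not the paper's construction. Note that it is the random initialization that lets the iteration break the symmetry among the 3! color permutations.

```python
# Minimal sketch: BP for proper 3-colorings on a toy graph.
import numpy as np

q = 3                                       # number of colors
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]    # hypothetical: triangle plus pendant
neighbors = {}
for i, j in edges:
    neighbors.setdefault(i, []).append(j)
    neighbors.setdefault(j, []).append(i)

# msgs[(i, j)][c]: message from i to j about j taking color c.
rng = np.random.default_rng(0)              # random init breaks color symmetry
msgs = {(i, j): rng.dirichlet(np.ones(q))
        for i in neighbors for j in neighbors[i]}

for _ in range(100):
    new = {}
    for (i, j) in msgs:
        prod = np.ones(q)                   # prod[c_i] of other incoming msgs
        for k in neighbors[i]:
            if k != j:
                prod = prod * msgs[(k, i)]
        out = prod.sum() - prod             # hard constraint: c_j != c_i
        new[(i, j)] = out / out.sum()
    msgs = new

for i in sorted(neighbors):
    b = np.prod([msgs[(k, i)] for k in neighbors[i]], axis=0)
    print(i, np.round(b / b.sum(), 3))      # BP marginals over the 3 colors
```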


Semantic results for ontic and epistemic change

arXiv.org Artificial Intelligence

We give some semantic results for an epistemic logic incorporating dynamic operators to describe information-changing events. Such events include epistemic changes, where agents become more informed about the non-changing state of the world, and ontic changes, wherein the world itself changes. The events are executed in information states that are modeled as pointed Kripke models. Our contribution consists of three semantic results. (i) Given two information states, there is an event transforming one into the other. The linguistic counterpart of this is that every consistent formula can be made true in every information state by the execution of an event. (ii) A more technical result is that every event corresponds to an event in which the postconditions formalizing ontic change are assignments to 'true' and 'false' only (instead of assignments to arbitrary formulas in the logical language). 'Corresponds' means that execution of either event in a given information state results in bisimilar information states. (iii) The third, also technical, result is that every event corresponds to a sequence of events wherein all postconditions are assignments of a single atom only (instead of simultaneous assignments of more than one atom).
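
The following Python sketch illustrates, in a deliberately simplified encoding of my own (not the paper's formalism), how executing an event with an epistemic precondition and ontic postconditions transforms a pointed Kripke model; in line with result (ii), the postconditions are assignments to 'true' and 'false' only.

```python
# Minimal sketch: event execution in a pointed Kripke model (toy encoding).
from dataclasses import dataclass

@dataclass
class PointedModel:
    val: dict    # world name -> set of atoms true in that world
    rel: dict    # agent -> set of (world, world) accessibility pairs
    point: str   # the designated actual world

def execute(m, pre, post):
    """Keep the worlds where precondition `pre` holds, then apply the ontic
    postconditions `post` (a map atom -> True/False) in each of them."""
    keep = {w for w, atoms in m.val.items() if pre(atoms)}
    assert m.point in keep, "event not executable in the actual world"
    new_val = {}
    for w in keep:
        atoms = set(m.val[w])
        for atom, value in post.items():
            (atoms.add if value else atoms.discard)(atom)
        new_val[w] = atoms
    new_rel = {a: {(u, v) for (u, v) in pairs if u in keep and v in keep}
               for a, pairs in m.rel.items()}
    return PointedModel(new_val, new_rel, m.point)

# Toy run: agent a cannot distinguish the p-world w from the not-p world v.
# Execute an event with precondition p and ontic postcondition q := true.
m = PointedModel(val={"w": {"p"}, "v": set()},
                 rel={"a": {("w", "w"), ("w", "v"), ("v", "w"), ("v", "v")}},
                 point="w")
m2 = execute(m, pre=lambda atoms: "p" in atoms, post={"q": True})
print(m2.val)   # only w survives, with q added: {'w': {'p', 'q'}}
```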


Discriminated Belief Propagation

arXiv.org Artificial Intelligence

Near-optimal decoding of good error control codes is generally a difficult task. However, for a certain type of (sufficiently) good codes, an efficient decoding algorithm with near-optimal performance exists. These codes are defined via a combination of constituent codes with low-complexity trellis representations. Their decoding algorithm is an instance of (loopy) belief propagation and is based on an iterative transfer of constituent beliefs. The beliefs are thereby given by the symbol probabilities computed in the constituent trellises. Even though weak constituent codes are employed, close-to-optimal performance is obtained, i.e., the encoder/decoder pair (almost) achieves the information-theoretic capacity. However, (loopy) belief propagation only performs well for a rather specific set of codes, which limits its applicability. In this paper, a generalisation of iterative decoding is presented. It is proposed to transfer more values than just the constituent beliefs. This is achieved by the transfer of beliefs obtained by independently investigating parts of the code space. This leads to the concept of discriminators, which are used to improve the decoder resolution within certain areas and which define discriminated symbol beliefs. It is shown that these beliefs approximate the overall symbol probabilities. This leads to an iteration rule that (below channel capacity) typically admits only the solution of the overall decoding problem. Via a Gaussian approximation, a low-complexity version of this algorithm is derived. Moreover, the approach may then be applied to a wide range of channel maps without a significant complexity increase.
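
For orientation, here is a minimal Python sketch of the standard sum-product (belief propagation) decoding loop that such schemes generalize; the 3-bit code and channel values are hypothetical, and the paper's discriminator construction is not reproduced.

```python
# Minimal sketch: sum-product decoding of a toy code with checks
# x0 ^ x1 = 0 and x1 ^ x2 = 0 (i.e., a 3-bit repetition code).
import numpy as np

H = np.array([[1, 1, 0],
              [0, 1, 1]])              # parity-check matrix (toy)
llr_ch = np.array([-1.2, 0.3, 2.1])    # channel LLRs; > 0 favors bit 0

M = np.zeros(H.shape)                  # check -> variable messages
for _ in range(10):
    V = (llr_ch + M.sum(axis=0) - M) * H          # variable -> check
    T = np.tanh(np.where(H == 1, V, np.inf) / 2)  # non-edges act as 1
    prod = T.prod(axis=1, keepdims=True)
    ratio = np.clip(prod / T, -0.999999, 0.999999)
    M = 2 * np.arctanh(ratio) * H                 # tanh rule, own term excluded

posterior = llr_ch + M.sum(axis=0)     # combined symbol beliefs
print((posterior < 0).astype(int))     # hard decisions -> [0 0 0]
```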


Qualitative Belief Conditioning Rules (QBCR)

arXiv.org Artificial Intelligence

In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR), recently developed within Dezert-Smarandache Theory (DSmT), to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow one to revise beliefs directly with words and linguistic labels and thus avoid the introduction of ad hoc translations of qualitative beliefs into quantitative ones for solving the problem.
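
As a toy illustration of calculating directly with ordered linguistic labels, in the spirit described above (the operator definitions are illustrative assumptions, not the paper's QBCR formulas):

```python
# Toy sketch: saturated arithmetic on an ordered label set L0 < ... < L4.
LABELS = ["L0_impossible", "L1_unlikely", "L2_even", "L3_likely", "L4_certain"]

def q_add(i, j):
    """Qualitative addition: Li + Lj = L_min(i+j, max index)."""
    return min(i + j, len(LABELS) - 1)

# Conditioning-style reallocation: the qualitative mass of a hypothesis
# ruled out by the new event is transferred to a surviving one.
belief = {"A": 1, "B": 2}                      # indices into LABELS
belief["A"] = q_add(belief["A"], belief["B"])  # B ruled out: A absorbs its mass
belief["B"] = 0
print({h: LABELS[v] for h, v in belief.items()})
# -> {'A': 'L3_likely', 'B': 'L0_impossible'}
```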


Remarks on Inheritance Systems

arXiv.org Artificial Intelligence

We attempt a conceptual analysis of inheritance diagrams, first in abstract terms, and then compare it to the "normality" and the "small/big sets" of preferential and related reasoning. The main ideas concern nodes as truth values and information sources, truth comparison by paths, accessibility or relevance of information by paths, relative normality, and prototypical reasoning.


Equivalence of LP Relaxation and Max-Product for Weighted Matching in General Graphs

arXiv.org Artificial Intelligence

Max-product belief propagation is a local, iterative algorithm for finding the mode/MAP estimate of a probability distribution. While it has been successfully employed in a wide variety of applications, there are relatively few theoretical guarantees of convergence and correctness for general loopy graphs that may have many short cycles. Of these, even fewer provide exact "necessary and sufficient" characterizations. In this paper we investigate the problem of using max-product to find the maximum weight matching in an arbitrary graph with edge weights. This is done by first constructing a probability distribution whose mode corresponds to the optimal matching, and then running max-product. Weighted matching can also be posed as an integer program, for which there is an LP relaxation. This relaxation is not always tight. In this paper we show that (1) if the LP relaxation is tight, then max-product always converges, and it converges to the correct answer; and (2) if the LP relaxation is loose, then max-product does not converge. This provides an exact, data-dependent characterization of max-product performance and a precise connection to LP relaxation, which is a well-studied optimization technique. Also, since the LP relaxation is known to be tight for bipartite graphs, our results generalize other recent results on using max-product to find weighted matchings in bipartite graphs.
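
The tightness condition is easy to test numerically. The sketch below (a hypothetical instance, not code from the paper) solves the matching LP relaxation with SciPy; a fractional optimum, as on the triangle, is precisely the situation in which the result above says max-product will not converge.

```python
# Minimal sketch: LP relaxation of maximum weight matching on a triangle.
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (2, 0)]       # odd cycle: the relaxation is loose
weights = np.array([1.0, 1.0, 1.0])
n = 3

# One constraint per vertex v: sum of x_e over edges touching v <= 1.
A = np.zeros((n, len(edges)))
for e, (u, v) in enumerate(edges):
    A[u, e] = A[v, e] = 1.0

# Maximize weights @ x  <=>  minimize -weights @ x, with 0 <= x_e <= 1.
res = linprog(-weights, A_ub=A, b_ub=np.ones(n), bounds=(0, 1))
print(res.x)   # -> [0.5 0.5 0.5]: fractional, so max-product fails to converge
```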


Metacognition in SNePS

AI Magazine

The SNePS knowledge representation, reasoning, and acting system has several features that facilitate metacognition in SNePS-based agents. The most prominent is the fact that propositions are represented in SNePS as terms rather than as sentences, so that propositions can occur as argu- ments of propositions and other expressions without leaving first-order logic. The SNePS acting subsystem is integrated with the SNePS reasoning subsystem in such a way that: there are acts that affect what an agent believes; there are acts that specify knowledge-contingent acts and lack-of-knowledge acts; there are policies that serve as "daemons," triggering acts when certain propositions are believed or wondered about. The GLAIR agent architecture supports metacognition by specifying a location for the source of self-awareness and of a sense of situatedness in the world. Several SNePS-based agents have taken advantage of these facilities to engage in self-awareness and metacognition.