
 Belief Revision


MACHINE INTELLIGENCE 11

AI Classics

In this paper we will be concerned with such reasoning in its most general form, that is, in inferences that are defeasible: given more information, we may retract them. The purpose of this paper is to introduce a form of non-monotonic inference based on the notion of a partial model of the world. We take partial models to reflect our partial knowledge of the true state of affairs. We then define non-monotonic inference as the process of filling in unknown parts of the model with conjectures: statements that could turn out to be false, given more complete knowledge. To take a standard example from default reasoning: since most birds can fly, if Tweety is a bird it is reasonable to assume that she can fly, at least in the absence of any information to the contrary. We thus have some justification for filling in our partial picture of the world with this conjecture. If our knowledge includes the fact that Tweety is an ostrich, then no such justification exists, and the conjecture must be retracted.
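The Tweety default can be illustrated with a tiny defeasible-inference sketch (hypothetical helper names, not the paper's formalism): a default rule fills in the partial model with a conjecture, which is retracted once contradicting information arrives.

```python
# Minimal sketch of defeasible inference: fill a partial model with
# default conjectures, retracting them when stronger facts arrive.

def conjectures(facts):
    """Apply the default 'birds normally fly' unless contradicted."""
    derived = set(facts)
    # Known exception: ostriches are birds that do not fly.
    if ("bird", "tweety") in facts and ("ostrich", "tweety") not in facts:
        derived.add(("flies", "tweety"))
    return derived

# With only 'bird(tweety)', we conjecture 'flies(tweety)'.
assert ("flies", "tweety") in conjectures({("bird", "tweety")})

# Learning 'ostrich(tweety)' forces the conjecture to be retracted:
# inference is non-monotonic -- more information, fewer conclusions.
assert ("flies", "tweety") not in conjectures(
    {("bird", "tweety"), ("ostrich", "tweety")})
```

The second call shows the non-monotonicity: adding a fact removes a previously drawn conclusion.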


Z.til

AI Classics

This paper describes some work on automatically generating finite counterexamples in topology and on the use of counterexamples to speed up proof discovery in intermediate analysis, and gives some example theorems where human provers are aided in proof discovery by the use of examples.


RESIDUE: A Deductive Approach to Design Synthesis

AI Classics

We present a new approach to deductive design synthesis, the Residue Approach, in which designs are represented as sets of constraints. Previous approaches, such as PROLOG [18] or the work of Manna and Waldinger [11], express designs as bindings on single terms. We give a complete and sound procedure for finding sets of propositions constituting a legal design. The size of the search space of the procedure and the advantages and disadvantages of the Residue Approach are analysed. In particular, we show how Residue can avoid backtracking caused by making design decisions of overly coarse granularity, whereas it is awkward for the single-term approaches to do the same. In addition, we give a rule for constraint propagation in deductive synthesis and show its use in pruning the design space. Finally, Residue is related to other work, in particular to Default Logic [16] and to Assumption-Based Truth Maintenance [1].



Approximate evaluation of marginal association probabilities with belief propagation

arXiv.org Artificial Intelligence

Data association, the problem of reasoning over correspondence between targets and measurements, is a fundamental problem in tracking. This paper presents a graphical model formulation of data association and applies an approximate inference method, belief propagation (BP), to obtain estimates of marginal association probabilities. We prove that BP is guaranteed to converge, and bound the number of iterations necessary. Experiments reveal a favourable comparison to prior methods in terms of accuracy and computational complexity.


Linearized and Single-Pass Belief Propagation

arXiv.org Artificial Intelligence

How can we tell when accounts are fake or real in a social network? And how can we tell which accounts belong to liberal, conservative or centrist users? Often, we can answer such questions and label nodes in a network based on the labels of their neighbors and appropriate assumptions of homophily ("birds of a feather flock together") or heterophily ("opposites attract"). One of the most widely used methods for this kind of inference is Belief Propagation (BP), which iteratively propagates the information from a few nodes with explicit labels throughout a network until convergence. One main problem with BP, however, is that there are no known exact guarantees of convergence in graphs with loops. This paper introduces Linearized Belief Propagation (LinBP), a linearization of BP that allows a closed-form solution via intuitive matrix equations and, thus, comes with convergence guarantees. It handles homophily, heterophily, and more general cases that arise in multi-class settings. Plus, it allows a compact implementation in SQL. The paper also introduces Single-pass Belief Propagation (SBP), a "localized" version of LinBP that propagates information across every edge at most once and for which the final class assignments depend only on the nearest labeled neighbors. In addition, SBP allows fast incremental updates in dynamic networks. Our runtime experiments show that LinBP and SBP are orders of magnitude faster than standard BP.
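The closed-form matrix view can be sketched with a simplified linearized propagation update (a sketch in the spirit of LinBP; the published method includes an echo-cancellation term that is omitted here). Centered beliefs B are repeatedly propagated through the adjacency matrix A and a centered coupling matrix H:

```python
import numpy as np

# Simplified linearized label propagation: centered beliefs B (n x k)
# are propagated through adjacency A with a coupling matrix H.
# A positive diagonal in H encodes homophily.

def linbp(A, E, H, iters=50):
    B = E.copy()
    for _ in range(iters):
        # Fixed-point iteration; a closed-form solution exists when
        # the spectral radius of the linear update is below 1.
        B = E + A @ B @ H
    return B

# Toy graph: two connected nodes, only node 0 explicitly labeled class 0.
A = np.array([[0., 1.], [1., 0.]])
E = np.array([[0.1, -0.1], [0., 0.]])        # centered explicit beliefs
H = 0.1 * np.array([[1., -1.], [-1., 1.]])   # homophilous coupling
B = linbp(A, E, H)
# Under homophily, the unlabeled node inherits its neighbor's class.
assert B[1, 0] > B[1, 1]
```

The fixed-point iteration converges here because the norms of A and H keep the update a contraction; that is exactly the kind of condition under which the linearized system admits a closed-form solution.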


Deep Learning-Based Goal Recognition in Open-Ended Digital Games

AAAI Conferences

While many open-ended digital games feature non-linear storylines and multiple solution paths, it is challenging for game developers to create effective game experiences in these settings due to the freedom given to the player. To address these challenges, goal recognition, a computational player-modeling task, has been investigated to enable digital games to dynamically predict players' goals. This paper presents a goal recognition framework based on stacked denoising autoencoders, a variant of deep learning. The learned goal recognition models, which are trained from a corpus of player interactions, not only offer improved performance, but also offer the substantial advantage of eliminating the need for labor-intensive feature engineering. An evaluation demonstrates that the deep learning-based goal recognition framework significantly outperforms the previous state-of-the-art goal recognition approach based on Markov logic networks.


Belief revision by examples

arXiv.org Artificial Intelligence

A common assumption in belief revision is that the reliability of the information sources is either given, derived from temporal information, or the same for all. This article does not propose a new semantics for integration; rather, it addresses the problem of obtaining the reliability of the sources given the result of a previous merging. As an example, the relative reliability of two sensors can be assessed given some certain observation, and this assessment then guides subsequent mergings of data coming from them.


Defining Relative Likelihood in Partially-Ordered Preferential Structures

arXiv.org Artificial Intelligence

Starting with a likelihood or preference order on worlds, we extend it to a likelihood ordering on sets of worlds in a natural way, and examine the resulting logic. Lewis (1973) earlier considered such a notion of relative likelihood in the context of studying counterfactuals, but he assumed a total preference order on worlds. Complications arise when examining partial orders that are not present for total orders. There are subtleties involving the exact approach to lifting the order on worlds to an order on sets of worlds. In addition, the axiomatization of the logic of relative likelihood in the case of partial orders gives insight into the connection between relative likelihood and default reasoning.
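One natural lifting from worlds to sets, shown here purely for illustration (it is one of several candidates, not necessarily the definition the paper settles on), says A is at least as likely as B when every world in B is dominated by some world in A. With a partial order on worlds, two sets can each fail to dominate the other, which is exactly the kind of subtlety absent from Lewis's total-order setting:

```python
# Lifting a partial order on worlds to sets of worlds (one
# illustrative choice among several possible liftings).

def geq(order, a, b):
    """order: set of (x, y) pairs meaning x >= y; reflexivity is implicit."""
    return a == b or (a, b) in order

def set_geq(order, A, B):
    """A >= B iff every world in B is dominated by some world in A."""
    return all(any(geq(order, a, b) for a in A) for b in B)

# Worlds: w1 >= w2, while w3 is incomparable to both (a partial order).
order = {("w1", "w2")}

assert set_geq(order, {"w1"}, {"w2"})       # w1 dominates w2
assert not set_geq(order, {"w1"}, {"w3"})   # w3 is incomparable
# Neither set dominates the other: incomparability survives the lifting.
assert not set_geq(order, {"w3"}, {"w1"})
```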


Modular Belief Updates and Confusion about Measures of Certainty in Artificial Intelligence Research

arXiv.org Artificial Intelligence

Over the last decade, there has been growing interest in the use of measures of change in belief for reasoning with uncertainty in artificial intelligence research. An important characteristic of several methodologies that reason with changes in belief, or belief updates, is a property that we term modularity. We call updates that satisfy this property modular updates. Whereas probabilistic measures of belief update that satisfy the modularity property were first discovered in the nineteenth century, knowledge and discussion of these quantities remains obscure in artificial intelligence research. We define modular updates and discuss their inappropriate use in two influential expert systems.
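The classic probabilistic update with the modularity property is the odds-likelihood form of Bayes' rule: each piece of evidence multiplies the current odds by its own likelihood ratio, independently of what other evidence has been processed (valid only when the evidence items are conditionally independent given the hypothesis). A minimal sketch:

```python
# Modular probabilistic update in odds-likelihood form: each evidence
# item contributes a likelihood ratio P(e|H) / P(e|not H), and updates
# multiply -- the order and grouping of evidence do not matter.
# Modularity here relies on conditional independence of the evidence.

def update_odds(odds, likelihood_ratio):
    return odds * likelihood_ratio

prior_odds = 1.0            # P(H) / P(not H) = 0.5 / 0.5
evidence_lrs = [2.0, 3.0]   # likelihood ratio of each finding

odds = prior_odds
for lr in evidence_lrs:
    odds = update_odds(odds, lr)

posterior = odds / (1 + odds)   # convert odds back to a probability
assert abs(posterior - 6 / 7) < 1e-9
```

Because the update is a product of per-evidence factors, the same posterior results from any order of processing, which is precisely the modularity the abstract refers to.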