Belief Revision


Dependence in Propositional Logic: Formula-Formula Dependence and Formula Forgetting -- Application to Belief Update and Conservative Extension

arXiv.org Artificial Intelligence

Dependence is an important concept for many tasks in artificial intelligence: a task can often be executed more efficiently by discarding whatever is independent of it. In this paper, we propose two novel notions of dependence in propositional logic: formula-formula dependence and formula forgetting. The first is a relation between formulas capturing whether one formula depends on another, while the second is an operation that returns the strongest consequence independent of a formula. We also apply these two notions to two well-known problems: belief update and conservative extension. First, we define a new update operator based on formula-formula dependence. Second, we reduce conservative extension to formula forgetting.
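As background for the forgetting operation the abstract describes, here is a minimal sketch of classic *variable* forgetting (the paper generalizes this to forgetting a whole formula): forget(phi, p) = phi[p/true] OR phi[p/false], the strongest consequence of phi that does not depend on p. Formulas are represented as Python predicates over assignment dicts; the encoding is illustrative, not the paper's.

```python
from itertools import product

def forget_var(phi, p):
    """Variable forgetting: psi(v) = phi(v with p=True) OR phi(v with p=False).
    The result is the strongest consequence of phi that is independent of p."""
    def psi(v):
        return phi({**v, p: True}) or phi({**v, p: False})
    return psi

# phi = (p AND q) OR r; forgetting p yields a formula equivalent to (q OR r):
phi = lambda v: (v["p"] and v["q"]) or v["r"]
psi = forget_var(phi, "p")
for q, r in product([False, True], repeat=2):
    assert psi({"p": False, "q": q, "r": r}) == (q or r)
```

The result no longer mentions p (both substitutions are tried), which is exactly the "independent of" property the abstract refers to.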


Integrating Human-Provided Information Into Belief State Representation Using Dynamic Factorization

arXiv.org Artificial Intelligence

In partially observed environments, it can be useful for a human to provide the robot with declarative information that represents probabilistic relational constraints on properties of objects in the world, augmenting the robot's sensory observations. For instance, a robot tasked with a search-and-rescue mission may be informed by the human that two victims are probably in the same room. An important question arises: how should we represent the robot's internal knowledge so that this information is correctly processed and combined with raw sensory information? In this paper, we provide an efficient belief state representation that dynamically selects an appropriate factoring, combining aspects of the belief when they are correlated through information and separating them when they are not. This strategy works in open domains, in which the set of possible objects is not known in advance, and provides significant improvements in inference time over a static factoring, leading to more efficient planning for complex partially observed tasks. We validate our approach experimentally in two open-domain planning problems: a 2D discrete gridworld task and a 3D continuous cooking task.
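The factoring idea above can be illustrated with a toy sketch (names and structure are my own, not the paper's implementation): variables start in independent factors, and a piece of human-provided information that correlates two variables forces their factors to be merged into one joint before conditioning.

```python
def merge(f1, f2):
    """Product of two factors; a factor is (vars_tuple, {value_tuple: prob})."""
    v1, t1 = f1
    v2, t2 = f2
    return (v1 + v2, {a + b: p * q for a, p in t1.items() for b, q in t2.items()})

def condition(factor, likelihood):
    """Multiply in a soft constraint (a likelihood on the joint), renormalize."""
    vs, t = factor
    t = {vals: p * likelihood(dict(zip(vs, vals))) for vals, p in t.items()}
    z = sum(t.values())
    return (vs, {vals: p / z for vals, p in t.items()})

# Two victims' rooms, initially independent and uniform over rooms 0 and 1:
f_a = (("room_a",), {(0,): 0.5, (1,): 0.5})
f_b = (("room_b",), {(0,): 0.5, (1,): 0.5})

# Human: "the two victims are probably in the same room" (weight 0.9 vs 0.1).
# The statement correlates room_a and room_b, so the factors are merged first:
joint = condition(merge(f_a, f_b),
                  lambda v: 0.9 if v["room_a"] == v["room_b"] else 0.1)
```

After conditioning, the same-room states each carry probability 0.45; had the information mentioned only one variable, its factor could have been updated alone, which is the efficiency the dynamic factoring exploits.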


Modeling Belief Change on Epistemic States

AAAI Conferences

Belief revision always results in trusting the new evidence, so it may admit an unreliable piece of evidence and discard a more confident one. We therefore use belief change instead of belief revision to remedy this weakness. By introducing epistemic states, we take into account the strength of the evidence that influences the change of belief. In this paper, we present a set of postulates to characterize belief change by epistemic states and establish representation theorems for those postulates. We show that from an epistemic state, a corresponding ordinal conditional function in Spohn's sense can be derived, and the result of combining two epistemic states thus reduces to the combination of the two corresponding ordinal conditional functions proposed by Laverny and Lang. Furthermore, when restricted to the belief revision setting, we prove that our results induce all of Darwiche and Pearl's postulates.
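The ordinal-conditional-function reduction mentioned above can be sketched concretely. An OCF assigns each world a plausibility rank (lower = more plausible, at least one world at rank 0), and a simple pointwise-sum combination, in the spirit of Laverny and Lang's treatment, lets stronger evidence outweigh weaker evidence instead of the newest input always winning. The world encoding below is illustrative.

```python
def normalize(kappa):
    """An ordinal conditional function assigns rank 0 to at least one world."""
    m = min(kappa.values())
    return {w: r - m for w, r in kappa.items()}

def combine(k1, k2):
    """Pointwise-sum combination of two ranking functions over the same worlds;
    lower rank means more plausible."""
    return normalize({w: k1[w] + k2[w] for w in k1})

def belief(kappa):
    """The agent believes exactly the most plausible (rank-0) worlds."""
    return {w for w, r in kappa.items() if r == 0}

# Worlds over atoms p, q. k1 believes p AND q; k2 more firmly believes NOT q:
k1 = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": 3}
k2 = {"pq": 2, "p~q": 0, "~pq": 3, "~p~q": 1}
combined = combine(k1, k2)
```

Here the combined belief is p AND NOT q: the strength encoded in the ranks decides the conflict over q, rather than one input being trusted wholesale.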


Morphologic for knowledge dynamics: revision, fusion, abduction

arXiv.org Artificial Intelligence

Several tasks in artificial intelligence require the ability to model knowledge dynamics; these include belief revision, fusion and belief merging, and abduction. In this paper we exploit the algebraic framework of mathematical morphology in the context of propositional logic and define operations such as dilation and erosion of a set of formulas. We derive concrete operators, based on a semantic approach, that have an intuitive interpretation and that are formally well behaved, to perform revision, fusion, and abduction. Computation and tractability are addressed, and simple examples illustrate the typical results that can be obtained.
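A minimal sketch of the dilation idea, under the common assumption that worlds are tuples of truth values and dilation uses the unit Hamming ball: revising phi by mu grows the models of phi one dilation step at a time until they meet the models of mu, then keeps the intersection. This is one concrete instance of the semantic operators the abstract describes, not the paper's full construction.

```python
def dilate(worlds, n):
    """Dilation by the unit Hamming ball: add every world differing from some
    world in the set in at most one atom (n = number of atoms)."""
    out = set(worlds)
    for w in worlds:
        for i in range(n):
            out.add(w[:i] + (not w[i],) + w[i + 1:])
    return out

def revise(phi_worlds, mu_worlds, n):
    """Dilation-based revision: grow the models of phi until they meet the
    models of mu, then keep the mu-worlds closest to phi."""
    cur = set(phi_worlds)
    while not (cur & mu_worlds):
        nxt = dilate(cur, n)
        if nxt == cur:  # mu is unreachable (unsatisfiable over these worlds)
            return set()
        cur = nxt
    return cur & mu_worlds

# Atoms (p, q): phi = p AND q, mu = NOT p AND NOT q.
result = revise({(True, True)}, {(False, False)}, 2)
```

Revising p AND q by NOT p AND NOT q takes two dilation steps and returns the single world (False, False), matching the distance-based intuition behind the operators.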


In Praise of Belief Bases: Doing Epistemic Logic Without Possible Worlds

AAAI Conferences

We introduce a new semantics for a logic of explicit and implicit beliefs based on the concept of a multi-agent belief base. Unlike existing Kripke-style semantics for epistemic logic, in which the notions of possible world and doxastic/epistemic alternative are primitive, in our semantics they are non-primitive and are defined from the concept of a belief base. We provide a complete axiomatization and a decidability result for our logic.


Book Reviews

AI Magazine

Conceptual Spaces: The Geometry of Thought is a book by Peter Gärdenfors, professor of cognitive science at Lund University, Sweden. Gärdenfors has authored another book in this series (based on work with Carlos Alchourron and David Makinson), Knowledge in Flux, a definitive account of the widely examined AGM (after Alchourron, Gärdenfors, and Makinson) theory of belief revision. The AGM theory is firmly based on classical logic and its model theory, and by his founding participation in developing it, Gärdenfors has earned the right to critique knowledge representation. His new book is not primarily about logic, but it is certainly not an apostasy either. If I may be permitted a minor irreverence, I would say that this book came not to destroy logic but to fulfill.


Book Review

AI Magazine

The idea is that although an AI system without the frame problem might, say, read an echocardiogram and diagnose a heart defect, a really smart autonomous robot will arrive only if, like us humans, it can handle the frame problem. The highlight … is an entertaining go-round between two pugilists trading blows in civil but gloves-off style, reminiscent of a net discussion. We're still confronted by a difficult question: Is there a solution to the frame problem? If not, then R2D2 might forever be but a creature of fiction. If, however, the frame problem is solvable, we must confront yet another question: Is there a general solution, or is the best that can be mustered a so-called domain-dependent solution?


The 2005 AAAI Classic Paper Awards

AI Magazine

Haussler's paper was therefore important in linking the new PAC learning theory work with the ongoing work on machine learning within AI. Twenty years later, that link is firmly established, and the two research communities have largely merged into one. In fact, much of the dramatic progress in machine learning over the past two decades has come from a fruitful marriage between research on learning theory and the design of practical learning algorithms for particular problem classes. Mitchell and Levesque provide commentary on the two AAAI Classic Paper awards, given at the AAAI-05 conference in Pittsburgh, Pennsylvania. The two winning papers were "Quantifying the Inductive Bias in Concept Learning," by David Haussler, and "Default Reasoning, Nonmonotonic Logics, and the Frame Problem," by Steve Hanks and Drew McDermott.


Convergence analysis of belief propagation for pairwise linear Gaussian models

arXiv.org Machine Learning

Gaussian belief propagation (BP) has been widely used for distributed inference in large-scale networks such as the smart grid, sensor networks, and social networks, where local measurements/observations are scattered over a wide geographical area. One particular case arises when two neighboring agents share a common observation. For example, to estimate voltage in the direct current (DC) power flow model, the current measurement over a power line is proportional to the voltage difference between the two neighboring buses. When the Gaussian BP algorithm is applied to this type of problem, the convergence condition remains an open issue. In this paper, we analyze the convergence properties of Gaussian BP for this pairwise linear Gaussian model. We show analytically that the updating information matrix converges at a geometric rate to a unique positive definite matrix from an arbitrary positive semidefinite initial value, and we further provide the necessary and sufficient condition for convergence of the belief mean vector to the optimal estimate.
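For readers unfamiliar with the algorithm being analyzed, here is a generic sketch of synchronous Gaussian BP in information form on a scalar model (this is textbook Gaussian BP, not the paper's pairwise shared-observation variant): messages carry precision and potential corrections, and on a tree the beliefs are exact, so the means recover J^{-1} h.

```python
def gaussian_bp(J, h, iters=100):
    """Synchronous Gaussian BP: J is the information matrix (nested lists),
    h the potential vector. Returns the belief means at each node."""
    n = len(h)
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and J[i][j] != 0]
    dJ = {e: 0.0 for e in edges}  # message precision corrections i -> j
    dh = {e: 0.0 for e in edges}  # message potential corrections i -> j
    for _ in range(iters):
        dJ_new, dh_new = {}, {}
        for i, j in edges:
            # combine node i's local information with all incoming messages
            # except the one from j, then pass the result through the edge
            Jp = J[i][i] + sum(dJ[k, t] for k, t in edges if t == i and k != j)
            hp = h[i] + sum(dh[k, t] for k, t in edges if t == i and k != j)
            dJ_new[i, j] = -J[i][j] * J[j][i] / Jp
            dh_new[i, j] = -J[i][j] * hp / Jp
        dJ, dh = dJ_new, dh_new
    means = []
    for i in range(n):
        Jhat = J[i][i] + sum(dJ[k, t] for k, t in edges if t == i)
        hhat = h[i] + sum(dh[k, t] for k, t in edges if t == i)
        means.append(hhat / Jhat)
    return means

# A 3-node chain (a tree, so BP is exact): J x = h has solution (1.5, 2.0, 1.5)
J = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
h = [1.0, 1.0, 1.0]
means = gaussian_bp(J, h)
```

The quantities Jhat at the end are exactly the "updating information matrix" entries whose geometric convergence the paper establishes for its loopy pairwise model.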