Iterated Belief Change Due to Actions and Observations

AAAI Conferences

In action domains where agents may have erroneous beliefs, reasoning about the effects of actions involves reasoning about belief change. In this paper, we use a transition system approach to reason about the evolution of an agent's beliefs as actions are executed. Some actions cause an agent to perform belief revision while others cause an agent to perform belief update, but the interaction between revision and update can be non-elementary. We present a set of rationality properties describing the interaction between revision and update, and we introduce a new class of belief change operators for reasoning about alternating sequences of revisions and updates. Our belief change operators can be characterized in terms of a natural shifting operation on total pre-orderings over interpretations. We compare our approach with related work on iterated belief change due to action, and we conclude with some directions for future research.


Iterated Belief Change Due to Actions and Observations

Journal of Artificial Intelligence Research

In action domains where agents may have erroneous beliefs, reasoning about the effects of actions involves reasoning about belief change. In this paper, we use a transition system approach to reason about the evolution of an agent's beliefs as actions are executed. Some actions cause an agent to perform belief revision while others cause an agent to perform belief update, but the interaction between revision and update can be non-elementary. We present a set of rationality properties describing the interaction between revision and update, and we introduce a new class of belief change operators for reasoning about alternating sequences of revisions and updates. Our belief change operators can be characterized in terms of a natural shifting operation on total pre-orderings over interpretations. We compare our approach with related work on iterated belief change due to action, and we conclude with some directions for future research.
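The distinction the abstract draws between revision and update can be made concrete in the standard model-based (Katsuno–Mendelzon-style) setting: revision selects the most plausible models of the new information under a single pre-order over interpretations, while update moves each current model individually to its closest models of the new information. The sketch below is illustrative only; the toy ranking, the Hamming distance, and all names are assumptions, not the paper's shifting construction.

```python
from itertools import product

# Worlds are truth assignments to two atoms; a total pre-order is encoded
# as a rank function (lower rank = more plausible). The ranking and the
# distance below are illustrative assumptions, not the paper's operators.
ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=2)]

def models(formula):
    """Worlds satisfying a formula given as a predicate over assignments."""
    return [w for w in WORLDS if formula(w)]

def revise(rank, formula):
    """KM-style revision: the globally most plausible formula-worlds."""
    candidates = models(formula)
    best = min(rank(w) for w in candidates)
    return [w for w in candidates if rank(w) == best]

def update(belief_worlds, dist, formula):
    """KM-style update: for each current world, its closest formula-worlds."""
    candidates = models(formula)
    result = []
    for v in belief_worlds:
        d = min(dist(v, w) for w in candidates)
        result.extend(w for w in candidates if dist(v, w) == d and w not in result)
    return result

# Toy ranking: worlds where p holds are more plausible.
rank = lambda w: 0 if w["p"] else 1
# Hamming distance between assignments, a common choice for update.
dist = lambda v, w: sum(v[a] != w[a] for a in ATOMS)

revised = revise(rank, lambda w: w["q"])           # revise by q
updated = update(models(lambda w: w["p"]), dist,   # update p-worlds by not-p
                 lambda w: not w["p"])
```

Revising by q keeps only the single most plausible q-world, whereas updating the p-worlds by ¬p changes each p-world pointwise, retaining the distinction in q that revision would collapse.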


Dependency-Directed Reconsideration: Belief Base Optimization for Truth Maintenance Systems

AAAI Conferences

We define reconsideration, a non-prioritized belief change operation on a finite set of base beliefs. Reconsideration is a hindsight belief change repair that eliminates negative effects caused by the order of previously executed belief change operations. Beliefs that had previously been removed are returned to the base if there are no longer valid reasons for their removal. This might result in less preferred beliefs being removed, and additional beliefs being returned. The end product is an optimization of the belief base, converting the results of a series of revisions to the very base that would have resulted from a batch revision performed after all base beliefs were entered/added. Reconsideration can be done by examining the entire set of all base beliefs (both currently believed and retracted) -- or, if the believed base is consistent, by examining all retracted beliefs for possible return. This, however, is computationally expensive. We present a more efficient, TMS-friendly algorithm, dependency-directed reconsideration (DDR), which can produce the same results by examining only a dynamically determined subset of base beliefs that are actually affected by changes made since the last base optimization process. DDR is an efficient, anytime, belief base optimizing algorithm that eliminates operation order effects.
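The "batch revision" result that reconsideration is meant to recover can be sketched as a greedy pass over the base in preference order: keep each belief only if it is jointly satisfiable with the beliefs kept so far. This ignores DDR's dependency-directed efficiency gains and uses brute-force satisfiability over a tiny atom set; the beliefs and names below are illustrative assumptions, not the paper's example.

```python
from itertools import product

# Formulas are predicates over truth assignments to a fixed atom set.
ATOMS = ("p", "q", "r")

def satisfiable(formulas):
    """Brute-force satisfiability check over the fixed atom set."""
    for vals in product([True, False], repeat=len(ATOMS)):
        w = dict(zip(ATOMS, vals))
        if all(f(w) for f in formulas):
            return True
    return False

def optimal_base(beliefs_by_preference):
    """Greedy maximal consistent subset, honoring the preference order."""
    kept = []
    for name, f in beliefs_by_preference:
        if satisfiable([g for _, g in kept] + [f]):
            kept.append((name, f))
    return [name for name, _ in kept]

# Most to least preferred (toy example).
beliefs = [
    ("¬p",  lambda w: not w["p"]),
    ("p∨q", lambda w: w["p"] or w["q"]),
    ("p∨r", lambda w: w["p"] or w["r"]),
    ("¬q",  lambda w: not w["q"]),
]
base = optimal_base(beliefs)  # ¬q conflicts with {¬p, p∨q} and is dropped
```

Because the pass always considers the full preference ordering, the resulting base is independent of the order in which the beliefs were originally entered, which is exactly the order-effect elimination the abstract describes.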


Knowledge State Reconsideration: Hindsight Belief Revision

AAAI Conferences

If p was not yet added to the base, and we had an IAT-preference ordering that ordered the beliefs in the following sequence from most to least preferred: p, p q, p r, m r, s t, w v, w k, p v, z v, n, r, w, s, v, m, z, q, t, k, then the optimal base would be B1 = {p, p q, p r, m r, s t, w v, w k, p v, z v, n, w, s, m, z}. The semi-revision addition of p (preferred over ¬p) followed by reconsideration is described in Example 2.


Probabilistic Belief Change: Expansion, Conditioning and Constraining

arXiv.org Artificial Intelligence

The AGM theory of belief revision has become an important paradigm for investigating rational belief changes. Unfortunately, researchers working in this paradigm have restricted much of their attention to rather simple representations of belief states, namely logically closed sets of propositional sentences. In our opinion, this has resulted in an overly abstract categorisation of belief change operations: expansion, revision, or contraction. Occasionally, in the AGM paradigm, probabilistic belief changes have also been considered, and it is widely accepted that the probabilistic version of expansion is conditioning. However, we argue that it may be more correct to view conditioning and expansion as two essentially different kinds of belief change, and that what we call constraining is a better candidate for being considered probabilistic expansion.
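For reference, the conditioning operation the abstract contrasts with constraining is ordinary Bayesian conditioning: zero out the probability of worlds outside the observed event and renormalize the rest. The distribution and event below are made up for illustration; the sketch says nothing about the authors' constraining operation itself.

```python
# Worlds are labelled by which of p, q hold; probabilities are illustrative.
P = {"pq": 0.4, "p¬q": 0.3, "¬pq": 0.2, "¬p¬q": 0.1}

def condition(P, event):
    """Bayesian conditioning: zero non-event worlds, renormalize the rest."""
    mass = sum(P[w] for w in P if w in event)
    return {w: (P[w] / mass if w in event else 0.0) for w in P}

# Condition on the event that p holds.
P_given_p = condition(P, {"pq", "p¬q"})  # {"pq": 4/7, "p¬q": 3/7, ...: 0.0}
```

Note that conditioning redistributes mass only among the event's worlds, which is one reason the authors argue it behaves more like revision than like a probabilistic counterpart of AGM expansion.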