Iterated Belief Change Due to Actions and Observations

AAAI Conferences

In action domains where agents may have erroneous beliefs, reasoning about the effects of actions involves reasoning about belief change. In this paper, we use a transition system approach to reason about the evolution of an agent's beliefs as actions are executed. Some actions cause an agent to perform belief revision while others cause an agent to perform belief update, but the interaction between revision and update can be non-elementary. We present a set of rationality properties describing the interaction between revision and update, and we introduce a new class of belief change operators for reasoning about alternating sequences of revisions and updates. Our belief change operators can be characterized in terms of a natural shifting operation on total pre-orderings over interpretations. We compare our approach with related work on iterated belief change due to action, and we conclude with some directions for future research.
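
To make the revision/update distinction concrete, here is a minimal Python sketch, not the authors' formalism: revision selects the globally most plausible models of the new formula under a total pre-order (encoded as integer ranks), while update modifies each current world pointwise. The atom names, rank values, and the Hamming-distance choice are all illustrative assumptions.

```python
# A minimal sketch of revision vs. update over propositional
# interpretations, with plausibility given by a total pre-order
# encoded as integer ranks. All names are illustrative.

from itertools import combinations

ATOMS = ("p", "q")

def interpretations(atoms=ATOMS):
    """All truth assignments, each a frozenset of the atoms made true."""
    return [frozenset(c) for r in range(len(atoms) + 1)
            for c in combinations(atoms, r)]

def revise(rank, phi):
    """AGM-style revision: the most plausible phi-worlds overall."""
    models = [w for w in interpretations() if phi(w)]
    best = min(rank[w] for w in models)
    return {w for w in models if rank[w] == best}

def update(current, phi, dist):
    """KM-style update: for each current world, the closest phi-worlds."""
    models = [w for w in interpretations() if phi(w)]
    result = set()
    for v in current:
        d = min(dist(v, w) for w in models)
        result |= {w for w in models if dist(v, w) == d}
    return result

def hamming(v, w):
    """Symmetric-difference distance between interpretations."""
    return len(v ^ w)

if __name__ == "__main__":
    # Plausibility: the agent finds the p-and-q world most plausible.
    rank = {w: 0 if w == frozenset({"p", "q"}) else 1
            for w in interpretations()}
    phi = lambda w: "q" not in w               # learn "not q"
    print(revise(rank, phi))                   # most plausible not-q worlds
    print(update({frozenset({"p", "q"})}, phi, hamming))  # pointwise change
```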


Belief Change with Uncertain Action Histories

Journal of Artificial Intelligence Research

We consider the iterated belief change that occurs following an alternating sequence of actions and observations. At each instant, an agent has beliefs about the actions that have occurred as well as beliefs about the resulting state of the world. We represent such problems by a sequence of ranking functions, so that an agent assigns a quantitative plausibility value to every action and every state at each point in time. The resulting formalism is able to represent fallible belief, erroneous perception, exogenous actions, and failed actions. We illustrate that our framework is a generalization of several existing approaches to belief change, and it appropriately captures the non-elementary interaction between belief update and belief revision.
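
As a rough illustration of the ranking-function machinery, the sketch below uses a Spohn-style encoding that I am assuming here rather than taking from the paper: each state gets a non-negative rank with 0 most plausible, the ranking is progressed through a plausibility-ranked action, and then conditioned on an observation. All state names, actions, and rank values are illustrative.

```python
# A minimal sketch, assuming Spohn-style ranking functions for the
# paper's "quantitative plausibility values". Names are illustrative.

INF = float("inf")

def normalize(kappa):
    """Shift so the minimum finite rank is 0 (a ranking-function invariant)."""
    m = min(r for r in kappa.values() if r < INF)
    return {s: (r - m if r < INF else INF) for s, r in kappa.items()}

def condition(kappa, evidence):
    """Condition on an observation: rule out states inconsistent with it."""
    return normalize({s: (r if evidence(s) else INF)
                      for s, r in kappa.items()})

def progress(kappa, action_ranks, transition):
    """One step: combine state plausibility with action plausibility.
    A successor's rank is its cheapest (state, action) explanation."""
    out = {}
    for s, r in kappa.items():
        for a, ra in action_ranks.items():
            t = transition(s, a)
            out[t] = min(out.get(t, INF), r + ra)
    return normalize(out)

if __name__ == "__main__":
    # States: a door is "open"/"closed"; "push" toggles it, "noop" doesn't.
    kappa = {"closed": 0, "open": 1}          # agent believes door closed
    acts = {"noop": 0, "push": 1}             # noop is most plausible
    step = lambda s, a: ({"open": "closed", "closed": "open"}[s]
                         if a == "push" else s)
    kappa = progress(kappa, acts, step)
    print(kappa)                              # {'closed': 0, 'open': 1}
    print(condition(kappa, lambda s: s == "open"))  # {'closed': inf, 'open': 0}
```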


Belief Change in the Context of Fallible Actions and Observations

AAAI Conferences

We consider the iterated belief change that occurs following an alternating sequence of actions and observations. At each instant, an agent has some beliefs about the action that occurs as well as beliefs about the resulting state of the world. We represent such problems by a sequence of ranking functions, so that an agent assigns a quantitative plausibility value to every action and every state at each point in time. The resulting formalism is able to represent fallible knowledge, erroneous perception, exogenous actions, and failed actions. We illustrate that our framework is a generalization of several existing approaches to belief change, and it appropriately captures the non-elementary interaction between belief update and belief revision.
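
Hard conditioning, as in the previous sketch, cannot model erroneous perception: a single wrong reading would eliminate the true state forever. A complementary sketch, again my own Spohn-style encoding rather than the paper's definitions, treats an observation as evidence of finite strength that penalizes contradicting states instead of eliminating them.

```python
# A small follow-on sketch: fallible observation as evidence of finite
# strength, so contradicting states are penalized, not ruled out.
# The encoding, names, and rank values are mine, not the paper's.

INF = float("inf")

def normalize(kappa):
    """Shift so the minimum finite rank is 0."""
    m = min(r for r in kappa.values() if r < INF)
    return {s: (r - m if r < INF else INF) for s, r in kappa.items()}

def observe(kappa, evidence, strength):
    """Strength-bounded conditioning: states violating the evidence end up
    exactly 'strength' ranks above the best consistent state."""
    consistent = normalize({s: (r if evidence(s) else INF)
                            for s, r in kappa.items()})
    violating = normalize({s: (r if not evidence(s) else INF)
                           for s, r in kappa.items()})
    return {s: min(consistent[s], violating[s] + strength) for s in kappa}

if __name__ == "__main__":
    kappa = {"closed": 0, "open": 2}
    # A weak sensor reading that the door is open, with strength 1.
    print(observe(kappa, lambda s: s == "open", 1))
    # {'closed': 1, 'open': 0}: belief flips, but 'closed' stays plausible.
```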


An Explicit Model of Belief Change for Cryptographic Protocol Verification

AAAI Conferences

Cryptographic protocols are structured sequences of messages that are used for exchanging information in a hostile environment. Many protocols have epistemic goals: a successful run of the protocol is intended to cause a participant to hold certain beliefs. As such, epistemic logics have been employed for the verification of cryptographic protocols. Although this approach to verification is explicitly concerned with changing beliefs, formal belief change operators have not been incorporated in previous work. In this preliminary paper, we introduce a new approach to protocol verification by combining a monotonic logic with a nonmonotonic belief change operator. In this context, a protocol participant is able to retract beliefs in response to new information and to postulate the most plausible event explaining new information. Hence, protocol participants may draw conclusions from received messages in the same manner that conclusions are drawn in formalizations of commonsense reasoning. We illustrate that this kind of reasoning is particularly important when protocol participants have incorrect beliefs.
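
To illustrate the flavour of this reasoning, here is a toy sketch with illustrative event names and plausibility values, not the paper's logic: a participant explains a received message abductively, revising its beliefs to the most plausible candidate events consistent with the message.

```python
# A toy sketch: a protocol participant ranks candidate explanations of a
# received message and adopts the most plausible consistent ones.
# All events, keys, and ranks are hypothetical.

def revise(beliefs, explanations, message):
    """Keep the most plausible candidate events that explain the message.
    'explanations' maps each event to (plausibility rank, fit predicate)."""
    consistent = {e: p for e, (p, fits) in explanations.items()
                  if fits(message)}
    if not consistent:
        return beliefs              # unexplainable message: no change here
    best = min(consistent.values())
    return {e for e, p in consistent.items() if p == best}

if __name__ == "__main__":
    # A nonce encrypted under the shared key K_ab arrives at Bob.
    msg = {"nonce": "N1", "key": "K_ab"}
    explanations = {
        "alice_sent":      (0, lambda m: m["key"] == "K_ab"),  # honest run
        "intruder_replay": (1, lambda m: m["key"] == "K_ab"),  # less plausible
        "corrupted":       (2, lambda m: True),
    }
    print(revise(set(), explanations, msg))    # {'alice_sent'}
```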


Belief Revision with Sensing and Fallible Actions

AAAI Conferences

An agent will generally have incomplete and possibly inaccurate knowledge about its environment. In addition, such an agent may receive erroneous information, perhaps by being misinformed about the truth of some formula. In this paper we present a general approach to reasoning about action and belief change in such a setting. An agent may carry out actions, but in some cases may inadvertently execute the wrong one (for example, pushing an unintended button). As well, an agent may sense whether a condition holds, and may revise its beliefs after being told that a formula is true. Our approach is based on an epistemic extension to basic action theories expressed in the situation calculus, augmented by a plausibility relation over situations. This plausibility relation can be thought of as characterising the agent's overall belief state; as such, it keeps track not just of the formulas that the agent believes to hold, but also of the plausibility of formulas that it does not believe to hold. The agent's belief state is updated by suitably modifying the plausibility relation following the execution of an action. We show that our account generalises previous approaches, and fully handles belief revision, sensing, and erroneous actions.
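
A minimal sketch in the spirit of this account, using my own encoding rather than the paper's situation-calculus axioms: situations are plausibility-ranked alternatives, erroneous execution is a plausibility-penalized "slip" action, sensing filters the ranked set, and belief is truth in all most-plausible situations. The fluent names and penalty value are illustrative.

```python
# A minimal sketch: plausibility-ranked situations updated by (possibly
# erroneous) actions and by sensing. Encoding and names are illustrative.

def do(action, situations, effect, slip=None, slip_penalty=1):
    """Execute 'action'; optionally a 'slip' action may occur instead,
    at a plausibility penalty (modeling erroneous execution)."""
    out = []
    for state, rank in situations:
        out.append((effect(action, state), rank))
        if slip is not None:
            out.append((effect(slip, state), rank + slip_penalty))
    return out

def sense(phi, situations):
    """Sensing that phi holds: discard situations where phi fails."""
    kept = [(s, r) for s, r in situations if phi(s)]
    return kept or situations   # refuse to collapse to the empty set

def believes(phi, situations):
    """phi is believed iff it holds in every most-plausible situation."""
    best = min(r for _, r in situations)
    return all(phi(s) for s, r in situations if r == best)

if __name__ == "__main__":
    effect = lambda a, s: {**s, a: True}       # each action sets its fluent
    sits = [({}, 0)]
    # The agent intends to push button "b1" but might hit "b2" instead.
    sits = do("b1", sits, effect, slip="b2")
    print(believes(lambda s: s.get("b1", False), sits))   # True
    sits = sense(lambda s: s.get("b2", False), sits)      # told b2 is on
    print(believes(lambda s: s.get("b2", False), sits))   # True after revision
```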