Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans

Zhang, Yuening, Williams, Brian C.

arXiv.org Artificial Intelligence

When agents collaborate on a task, it is important that they have some shared mental model of the task routines -- the set of feasible plans towards achieving the goals. In reality, however, situations often arise in which such a shared mental model cannot be guaranteed, such as in ad-hoc teams where agents may follow different conventions or when contingent constraints arise that only some agents are aware of. Previous work on human-robot teaming has assumed that the team has a set of shared routines, an assumption that breaks down in these situations. In this work, we leverage epistemic logic to enable agents to understand the discrepancy in each other's beliefs about feasible plans and dynamically plan their actions to adapt or communicate to resolve the discrepancy. We propose a formalism that extends conditional doxastic logic to describe knowledge bases in order to explicitly represent agents' nested beliefs about the feasible plans and state of execution. We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its action, including communication actions to explain the feasibility of plans, announce intent, and ask questions. Finally, we evaluate the success rate and scalability of the algorithm and show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
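The abstract mentions an online execution algorithm based on Monte Carlo Tree Search but does not reproduce it. As an illustrative sketch only, here is a minimal one-level MCTS action selector in Python; all names (`Node`, `ucb1`, `mcts`) and the toy reward setup are hypothetical, not the authors' code or state representation:

```python
import math

class Node:
    """One node in the search tree: a state plus visit statistics."""
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.visits, self.value = 0, 0.0

def ucb1(node, c=1.4):
    # Standard UCB1: exploit average value, explore rarely-visited children.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root_state, actions, step, reward, iters=500):
    """Pick the action with the best simulated outcome.
    `step(state, a)` returns the next state; `reward(state)` scores it."""
    root = Node(root_state)
    children = [Node(step(root_state, a), root, a) for a in actions]
    for _ in range(iters):
        child = max(children, key=ucb1)   # selection via UCB1
        r = reward(child.state)           # one-step rollout
        for n in (child, root):           # backup along the path
            n.visits += 1
            n.value += r
    return max(children, key=lambda n: n.visits).action
```

For example, with a step function where only "adapt" leads to a rewarding state, `mcts(0, ["adapt", "wait"], step=lambda s, a: 1 if a == "adapt" else 0, reward=lambda s: s)` returns `"adapt"`. The paper's algorithm additionally searches over communication actions and nested-belief states, which this sketch omits.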


Bisimulation and expressivity for conditional belief, degrees of belief, and safe belief

Andersen, Mikkel Birkegaard, Bolander, Thomas, van Ditmarsch, Hans, Jensen, Martin Holm

arXiv.org Artificial Intelligence

Plausibility models are Kripke models that agents use to reason about knowledge and belief, both their own and each other's. Such models are used to interpret the notions of conditional belief, degrees of belief, and safe belief. The logic of conditional belief contains that modality and also the knowledge modality, and similarly for the logic of degrees of belief and the logic of safe belief. With respect to these logics, plausibility models may contain too much information, so a proper notion of bisimulation is required that characterises them. We define that notion of bisimulation and prove the required characterisations: on the class of image-finite and preimage-finite models (with respect to the plausibility relation), two pointed Kripke models are modally equivalent in any of the three logics if and only if they are bisimilar. As a result, the information content of such a model can be equally well expressed in the logic of conditional belief, the logic of degrees of belief, or that of safe belief. We found this a surprising result. Still, it does not mean that the logics are equally expressive: the logics of conditional and degrees of belief are incomparable, the logics of degrees of belief and safe belief are incomparable, while the logic of safe belief is more expressive than the logic of conditional belief. In view of the result on bisimulation characterisation, this is an equally surprising result. We hope our insights may contribute to the growing community of formal epistemology and to work on the relation between qualitative and quantitative modelling.
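The conditional-belief modality interpreted on plausibility models can be stated concretely: an agent believes psi conditional on phi when psi holds in all most-plausible phi-worlds. The following Python sketch evaluates that standard semantics on a finite single-agent model; the function and variable names are illustrative assumptions, not notation from the paper:

```python
def conditional_belief(worlds, rank, cond, conseq):
    """B^cond conseq: conseq holds in all most-plausible cond-worlds.
    `rank[w]` is w's plausibility (lower = more plausible);
    `cond` and `conseq` are predicates on worlds."""
    cond_worlds = [w for w in worlds if cond(w)]
    if not cond_worlds:
        return True  # belief conditional on the impossible is vacuously true
    best = min(rank[w] for w in cond_worlds)
    return all(conseq(w) for w in cond_worlds if rank[w] == best)
```

On a toy model whose worlds are (kind, flies) pairs with the flying bird ranked most plausible, the agent unconditionally believes things fly, yet conditional on "penguin" believes the opposite; this is exactly the belief-revision behaviour conditional belief is meant to capture.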


On the Progression of Knowledge and Belief for Nondeterministic Actions in the Situation Calculus

Fang, Liangda (Sun Yat-sen University) | Liu, Yongmei (Sun Yat-sen University) | Wen, Ximing (Guangdong Institute of Public Administration)

AAAI Conferences

In a seminal paper, Lin and Reiter introduced the notion of progression for basic action theories in the situation calculus. Recently, Fang and Liu extended the situation calculus to account for multi-agent knowledge and belief change. In this paper, based on their framework, we investigate progression of both belief and knowledge in the single-agent propositional case. We first present a model-theoretic definition of progression of knowledge and belief. We show that for propositional actions, i.e., actions whose precondition axioms and successor state axioms are propositional formulas, progression of knowledge and belief reduces to forgetting in the logic of knowledge and belief, which we show is closed under forgetting. Consequently, we are able to show that for propositional actions, progression of knowledge and belief is always definable in the logic of knowledge and belief.
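The reduction to forgetting rests on the classical propositional fact that forgetting an atom p from a formula phi is the disjunction of phi with p set true and phi with p set false (existential quantification over p). A small self-contained Python sketch, representing formulas as functions from assignments to booleans (a simplification; the paper works in the richer logic of knowledge and belief):

```python
from itertools import product

def forget(phi, p):
    """Propositional forgetting: existentially quantify atom `p` out of
    `phi`. `phi` maps an assignment dict to a bool; so does the result."""
    return lambda a: phi({**a, p: True}) or phi({**a, p: False})

def equivalent(f, g, atoms):
    """Check truth-table equivalence of two formulas over `atoms`."""
    return all(f(dict(zip(atoms, vals))) == g(dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))
```

For instance, forgetting `p` from `p and q` yields a formula equivalent to `q`: the agent retains exactly what the formula said about the remaining atoms.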


Multiagent Knowledge and Belief Change in the Situation Calculus

Fang, Liangda (Sun Yat-sen University) | Liu, Yongmei (Sun Yat-sen University)

AAAI Conferences

Belief change is an important research topic in AI. It becomes more perplexing in multi-agent settings, since the action of an agent may be only partially observable to other agents. In this paper, we present a general approach to reasoning about actions and belief change in multi-agent settings. Our approach is based on a multi-agent extension to the situation calculus, augmented by a plausibility relation over situations and another over actions, which is used to represent agents' different perspectives on actions. When an action is performed, we update the agents' plausibility order on situations by giving priority to the plausibility order on actions, in line with the AGM approach of giving priority to new information. We show that our notion of belief satisfies the KD45 properties. For the special case of single-agent belief change, we show that our framework satisfies most of the classical AGM, KM, and DP postulates. We also present properties concerning the change of common knowledge and belief of a group of agents.
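One standard way to realize "giving priority to the plausibility order on actions" is a lexicographic combination: outcomes are ranked first by how plausible the action is, with the prior order on situations only breaking ties. The sketch below is a guess at that construction in Python, not the paper's situation-calculus machinery; all names (`priority_update`, `sit_rank`, `act_rank`) are hypothetical:

```python
def priority_update(sit_rank, act_rank, performed):
    """Revise plausibility over (situation, action) outcomes, giving
    priority to action plausibility (AGM-style: information about the
    action dominates the prior order on situations).
    Returns a rank over outcome pairs; lower = more plausible."""
    pairs = [(s, a) for s in sit_rank for a in performed]
    ordered = sorted(pairs, key=lambda sa: (act_rank[sa[1]], sit_rank[sa[0]]))
    rank, last_key, r = {}, None, -1
    for s, a in ordered:
        key = (act_rank[a], sit_rank[s])
        if key != last_key:
            r, last_key = r + 1, key
        rank[(s, a)] = r
    return rank
```

With two situations and a plausible versus an implausible reading of the observed action, every outcome of the plausible reading ends up ranked above every outcome of the implausible one, regardless of the prior situation order: the signature of priority to new information.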


Evidence and plausibility in neighborhood structures

van Benthem, Johan, Fernández-Duque, David, Pacuit, Eric

arXiv.org Artificial Intelligence

The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an {\em evidence logic} for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood $N$ indicates that the agent has reason to believe that the true state of the world lies in $N$. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we also consider a second, more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a $p$-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets and how coherent their total structure is. We give a structural study of the resulting `uniform' and `flat' models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change.
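The derived plausibility relation the abstract alludes to can be made concrete: a world v is at least as plausible as w when every piece of evidence covering w also covers v (a specialization order induced by the evidence sets). A small Python sketch of that derivation, offered as an illustration rather than the paper's exact construction:

```python
def plausibility_order(worlds, evidence):
    """Derive 'v is at least as plausible as w' from evidence sets:
    every evidence set containing w also contains v. Returns the
    relation as a set of (w, v) pairs."""
    def at_least_as_plausible(v, w):
        return all(v in E for E in evidence if w in E)
    return {(w, v) for w in worlds for v in worlds
            if at_least_as_plausible(v, w)}
```

With evidence sets {1, 2} and {2, 3}, world 2 is supported by everything that supports 1, but not conversely, so 2 sits strictly above 1 in the derived order; the relation is also reflexive, as every world trivially dominates itself.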