Chakraborti, Tathagata
Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation
Sreedharan, Sarath (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem, by framing those explanations in terms of the differences between their models. However, the human's mental model (and hence the difference) is often not known precisely, and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness, by generating "conformant explanations" that are applicable to a set of possible models. We also show how such explanations can contain superfluous information, and how such redundancies can be reduced using conditional explanations to iterate with the human to attain common ground. Finally, we introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanations, in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains, as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.
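To make the idea of a conformant explanation concrete, here is a minimal sketch, under strong simplifying assumptions: models are flat sets of abstract "features" rather than the annotated planning models used in the paper, the candidate human models and the plan_acceptable test are hypothetical stand-ins, and minimality is found by brute-force subset enumeration rather than the paper's search procedure.

```python
from itertools import chain, combinations

# Hypothetical example data: the robot's model, and a set of candidate
# human mental models (the human's actual model is not known precisely).
ROBOT_MODEL = frozenset({"f1", "f2", "f3", "f4"})
CANDIDATE_HUMAN_MODELS = [
    frozenset({"f1"}),
    frozenset({"f1", "f3"}),
]

def plan_acceptable(model):
    """Stand-in for a plan validity/optimality test; in a real system this
    would invoke a planner on the updated model (assumption)."""
    return {"f2", "f3"} <= model

def conformant_explanation(robot_model, candidates):
    """Smallest set of model updates that makes the robot's plan acceptable
    in EVERY candidate human model (hence "conformant")."""
    # Only features the human might be missing are worth communicating.
    diffs = sorted(robot_model - frozenset.intersection(*candidates))
    # Enumerate subsets by increasing size, so the first hit is minimal.
    subsets = chain.from_iterable(
        combinations(diffs, k) for k in range(len(diffs) + 1))
    for subset in subsets:
        if all(plan_acceptable(m | set(subset)) for m in candidates):
            return set(subset)
    return None  # no explanation suffices for all candidates

print(conformant_explanation(ROBOT_MODEL, CANDIDATE_HUMAN_MODELS))
# -> {'f2', 'f3'} for the toy data above
```

Note how the conformant explanation can carry information that is superfluous for a particular candidate (here, "f3" is redundant for the second model), which is exactly the redundancy the paper's conditional explanations aim to reduce.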
Balancing Explicability and Explanation in Human-Aware Planning
Sreedharan, Sarath (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable to a human observer, as well as the ability to provide explanations when such plans cannot be generated. This has led to the notion of "multi-model planning", which aims to incorporate the effects of human expectations into the deliberative process of a planner, either in the form of explicable task planning or of explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a trade-off during the plan generation process itself, by means of a model-space search method, MEGA. This in effect provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware", by bringing together existing principles of planning under the umbrella of a single plan generation process. We situate our discussion specifically with the recent work on explicable planning and explanation generation in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot involved in a typical search and reconnaissance task with an external supervisor.
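The following is a minimal sketch of the kind of trade-off MEGA-style model-space search resolves, not the paper's algorithm: models are again flat feature sets (hypothetical data), the explanation cost is the number of model updates communicated, and the explicability cost is a toy quadratic stand-in for the distance between the human's expected plan and the robot's plan.

```python
from itertools import chain, combinations

# Hypothetical example data.
ROBOT_MODEL = frozenset({"f1", "f2", "f3", "f4"})
HUMAN_MODEL = frozenset({"f1"})
ALPHA = 0.5  # relative weight of explicability vs. explanation cost

def tradeoff_search(robot, human, alpha):
    """Enumerate intermediate models between the human's and the robot's,
    and return the one minimizing:
        (explanation cost) + alpha * (explicability cost).
    Explaining more moves the human's model toward the robot's; explaining
    less forces the robot toward more explicable (expected) behavior."""
    gap = sorted(robot - human)
    candidates = chain.from_iterable(
        combinations(gap, k) for k in range(len(gap) + 1))

    def cost(updates):
        model = human | set(updates)
        explanation_cost = len(updates)
        # Toy quadratic stand-in so the optimum can land strictly between
        # the two endpoint models (assumption, not the paper's measure).
        explicability_cost = len(robot - model) ** 2
        return explanation_cost + alpha * explicability_cost

    best = min(candidates, key=cost)
    return human | set(best), cost(best)

model, cost = tradeoff_search(ROBOT_MODEL, HUMAN_MODEL, ALPHA)
print(sorted(model), cost)
# -> ['f1', 'f2', 'f3'] 2.5 : explain part of the model, stay partly explicable
```

Varying ALPHA moves the optimum between the two extremes: a large ALPHA favors explaining everything (pure explanation), a small one favors conforming to the human's expectation (pure explicability).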
Explanations as Model Reconciliation — A Multi-Agent Perspective
Sreedharan, Sarath (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
In this paper, we demonstrate how a planner (or a robot as an embodiment of it) can explain its decisions to multiple agents in the loop at once, considering not only the model that it used to come up with its decisions but also the (often misaligned) models of the same task that the other agents might have. To do this, we build on our previous work on multi-model explanation generation and extend it to account for settings where there is uncertainty in the robot's model of the explainee and/or there are multiple explainees with different models to explain to. We illustrate these concepts in a demonstration on a robot involved in a typical search and reconnaissance scenario with another human teammate and an external human supervisor.
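A minimal sketch of the multi-explainee setting, under the same flat-feature-set simplification as above (hypothetical data and helper names): the robot can either tailor a separate explanation to each explainee's model, or broadcast a single explanation that covers every explainee at once.

```python
# Hypothetical example data.
ROBOT_MODEL = frozenset({"f1", "f2", "f3"})
EXPLAINEE_MODELS = {
    "teammate":   frozenset({"f1", "f3"}),
    "supervisor": frozenset({"f1"}),
}

def per_agent_explanations(robot, explainees):
    """One tailored explanation per explainee: the model features each
    explainee is missing relative to the robot's model."""
    return {name: robot - model for name, model in explainees.items()}

def shared_explanation(robot, explainees):
    """One broadcast explanation covering all explainees: the union of the
    individual differences (so it may be redundant for some explainees)."""
    return frozenset().union(*(robot - m for m in explainees.values()))

print(per_agent_explanations(ROBOT_MODEL, EXPLAINEE_MODELS))
print(shared_explanation(ROBOT_MODEL, EXPLAINEE_MODELS))
```

This makes the core tension visible: tailored explanations minimize what each agent hears but cost more communication rounds, while a shared explanation is a single message that necessarily carries superfluous content for better-informed explainees.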