Explanations as Model Reconciliation — A Multi-Agent Perspective
Sreedharan, Sarath (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
In this paper, we demonstrate how a planner (or a robot embodying it) can explain its decisions to multiple agents in the loop, considering not only the model it used to arrive at those decisions but also the (often misaligned) models of the same task that the other agents may hold. To do this, we build on our previous work on multi-model explanation generation and extend it to settings where the robot is uncertain about its model of the explainee and/or there are multiple explainees with different models to explain to. We illustrate these concepts in a demonstration with a robot engaged in a typical search-and-reconnaissance scenario alongside a human teammate and an external human supervisor.
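The core model-reconciliation idea behind this line of work can be sketched as a search for a smallest model update: find the fewest changes to the explainee's model under which the robot's plan becomes acceptable. The sketch below is purely illustrative and not the paper's algorithm; it reduces models to sets of facts, and the names `reconcile` and `plan_is_optimal` are hypothetical stand-ins (the actual work operates on full planning models and plan-optimality checks).

```python
from itertools import combinations

def reconcile(robot_model, human_model, plan_is_optimal):
    """Illustrative sketch: find a minimal explanation, i.e. the smallest
    set of updates to the human's model after which the robot's plan is
    judged optimal (per the supplied predicate) in the updated model."""
    # Candidate updates: facts the human's model is missing relative to
    # the robot's, plus facts the human holds that the robot does not.
    diff = [("add", f) for f in robot_model - human_model] + \
           [("remove", f) for f in human_model - robot_model]
    # Enumerate subsets smallest-first so the first hit is minimal.
    for k in range(len(diff) + 1):
        for subset in combinations(diff, k):
            updated = set(human_model)
            for op, fact in subset:
                if op == "add":
                    updated.add(fact)
                else:
                    updated.discard(fact)
            if plan_is_optimal(updated):
                return list(subset)  # minimal model update that explains the plan
    return None  # no combination of updates reconciles the models
```

With multiple explainees, one could run such a search per human model, or search for a single update set that satisfies all of their optimality predicates at once, which is the kind of multi-agent setting the abstract describes.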