Towards Providing Explanations for AI Planner Decisions

arXiv.org Artificial Intelligence

To engender trust in AI, humans must understand what an AI system is trying to achieve, and why. To meet this need, the underlying AI process must produce justifications and explanations that are both transparent and comprehensible to the user. AI Planning is well placed to address this challenge. In this paper we present a methodology for providing initial explanations of the decisions made by the planner. Explanations are created by allowing the user to suggest alternative actions in plans and then comparing the resulting plans with the one found by the planner. The methodology is implemented in the new XAI-Plan framework.
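
A minimal sketch of the contrastive loop this abstract describes, in Python; `Plan`, `replan_with`, and the toy costs below are illustrative assumptions, not the XAI-Plan API. The real framework compiles the user's suggestion into the planning problem and calls an external planner; `replan_with` stands in for that step.

```python
# Sketch of "why not this action?" explanation by replan-and-compare.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Plan:
    actions: list[str]   # ordered action names
    cost: float          # total plan cost

def explain_choice(original: Plan,
                   replan_with: Callable[[str, int], Optional[Plan]],
                   user_action: str, step: int) -> str:
    """Answer "why not `user_action` at `step`?" by replanning and comparing."""
    alternative = replan_with(user_action, step)   # hypothetical planner hook
    if alternative is None:
        return f"No valid plan applies {user_action} at step {step}."
    delta = alternative.cost - original.cost
    if delta > 0:
        return f"{user_action} is feasible there, but the best such plan costs {delta:g} more."
    return f"{user_action} yields a plan at least as good; the planner's choice was a tie-break."

# Toy usage: a stand-in replanner that always returns a costlier plan.
base = Plan(["load", "drive", "unload"], cost=10.0)
stub = lambda action, step: Plan(["load", "fly", "unload"], cost=14.0)
print(explain_choice(base, stub, "fly", 1))
```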


Intelligent Rover Execution for Detecting Life in the Atacama Desert

AAAI Conferences

On-board supervisory execution is crucial for the deployment of more capable and autonomous remote explorers. Planetary science is considering robotic explorers that operate for long periods without ground supervision while interacting with a changing and often hostile environment. Effective and robust operations require on-board supervisory control with a high level of awareness of the principles of functioning of the environment and of the numerous internal subsystems that must be coordinated. We describe an on-board rover executive that was deployed on a rover as part of the "Limits of Life in the Atacama Desert (LITA)" field campaign sponsored by the NASA ASTEP program. The executive was built using the Intelligent Distributed Execution Architecture (IDEA), an execution framework that uses model-based and plan-based supervisory control as its fundamental computational paradigm. We present the results of the third field experiment, conducted in the Atacama Desert (Chile) from August to October 2005.
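
As a rough illustration of plan-based supervisory control, the sketch below shows a dispatch-monitor-replan cycle; the `planner` and `model` interfaces are assumptions for exposition, not the actual IDEA architecture.

```python
# Illustrative supervisory execution loop: dispatch the next planned
# token, check the observed outcome against the model, replan on board
# when reality diverges from expectation.
import time

def supervisory_loop(planner, model, state, goals, horizon_s=3600.0):
    """Dispatch planned tokens, monitor outcomes, replan on divergence."""
    deadline = time.monotonic() + horizon_s
    plan = planner.solve(model, state, goals)          # hypothetical planner call
    while plan and time.monotonic() < deadline:
        token = plan.pop(0)                            # next timed activity
        outcome = token.execute(state)                 # dispatch to a subsystem
        if model.consistent(state, token, outcome):
            state = model.apply(state, token)          # nominal: advance the state
        else:
            state = model.estimate(state, outcome)     # off-nominal: re-estimate...
            plan = planner.solve(model, state, goals)  # ...and replan on board
    return state
```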


Planning with Explanatory Actions: A Joint Approach to Plan Explicability and Explanations in Human-Aware Planning

arXiv.org Artificial Intelligence

In this work, we formulate the process of generating model-reconciliation explanations for planning problems as one of planning with explanatory actions. We show that these problems can be better understood within the framework of epistemic planning and that, in fact, most earlier works on explanation as model reconciliation correspond to tractable subsets of epistemic planning problems. We show empirically that our approach is computationally more efficient than existing techniques for explanation generation, and we discuss how it can be extended to capture most of the existing variants of explanation as model reconciliation. We end the paper with a discussion of how this formulation could be extended to generate novel explanatory behaviors.
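
To make the compilation idea concrete, here is a hedged Python sketch in which explanatory actions edit the human's model of the task. The data structures and the greedy selection are illustrative only; a real epistemic planner would search over interleavings of domain and explanatory actions.

```python
# Sketch: explanatory actions as state-changing operators over the
# human's model. Names are assumptions, not the authors' implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExplainAction:
    """Communicate one model difference (e.g. a missing precondition)."""
    difference: str   # a model feature the robot has but the human lacks

    def apply(self, human_model: frozenset) -> frozenset:
        return human_model | {self.difference}

def explanation_plan(robot_model, human_model, plan_actions):
    """Greedy illustration: explain every difference the plan relies on."""
    needed = set()
    for action in plan_actions:
        for feature in robot_model.get(action, set()):
            if feature not in human_model:
                needed.add(feature)
    steps = [ExplainAction(f) for f in sorted(needed)]
    model = frozenset(human_model)
    for step in steps:
        model = step.apply(model)       # belief update on the human's model
    return steps, model

# Toy usage: the human is unaware that pickup requires an empty arm.
steps, updated = explanation_plan(
    robot_model={"pickup": {"precondition: arm-empty"}},
    human_model=set(),
    plan_actions=["pickup"])
print([s.difference for s in steps])    # ['precondition: arm-empty']
```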


Balancing Explicability and Explanation in Human-Aware Planning

AAAI Conferences

Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable to a human observer, as well as providing explanations when such plans cannot be generated. This has led to the notion of "multi-model planning," which aims to incorporate the effects of human expectation in the deliberative process of a planner, either in the form of explicable task planning or of the explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a trade-off during the plan generation process itself by means of a model-space search method, MEGA. This in effect provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware" by bringing existing principles of planning together under the umbrella of a single plan generation process. We situate our discussion specifically with the recent work on explicable planning and explanation generation in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot involved in a typical search and reconnaissance task with an external supervisor.
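
The trade-off itself can be pictured with a small sketch: each candidate explanation (a set of model updates) carries a communication cost, and the remaining gap between the robot's plan and what the updated human model expects carries an explicability cost. The brute-force enumeration and the helpers `expected_plan`, `plan_distance`, and `update` below are assumptions for exposition, not the MEGA search itself.

```python
# Sketch of the explicability/explanation trade-off as an objective over
# candidate explanations. Exponential enumeration: illustration only.
from itertools import combinations

def tradeoff_search(robot_plan, human_model, differences, alpha,
                    expected_plan, plan_distance, update):
    """Pick the explanation minimizing |explanation| + alpha * inexplicability."""
    best, best_cost = None, float("inf")
    for r in range(len(differences) + 1):
        for expl in combinations(differences, r):
            updated = update(human_model, expl)           # human model after telling expl
            gap = plan_distance(robot_plan, expected_plan(updated))
            cost = len(expl) + alpha * gap                # communication + explicability
            if cost < best_cost:
                best, best_cost = list(expl), cost
    return best, best_cost
```

Intuitively, a large alpha pushes the agent toward fully explicable behavior (explain everything or conform to expectations), while a small alpha tolerates inexplicable plans and minimizes what is communicated.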


Hybrid Planning with Temporally Extended Goals for Sustainable Ocean Observing

AAAI Conferences

A challenge to modeling and monitoring the health of the ocean environment is that the ocean is largely under-sensed and difficult to sense remotely. Autonomous underwater vehicles (AUVs) can improve observability, for example of algal bloom regions, ocean acidification, and ocean circulation. This AUV paradigm, however, requires robust operation that is cost-effective and responsive to the environment. To achieve low cost, we generate operational sequences automatically from science goals, and we achieve robustness by reasoning about the discrete and continuous effects of actions. We introduce Kongming2, a generative planner for hybrid systems with temporally extended goals (TEGs) and temporally flexible actions. It takes high-level goals as input and outputs trajectories and actions of the hybrid system, for example an AUV. Kongming2 makes two major extensions to Kongming1: planning for TEGs, and planning with temporally flexible actions. We demonstrated a proof of concept of the planner in the Atlantic Ocean on Odyssey IV, an AUV designed and built by the MIT AUV Lab at Sea Grant.
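
The two extensions can be pictured with a toy encoding, assuming nothing about Kongming2's actual input language: a temporally extended goal as a condition required over an interval, and a temporally flexible action with bounded rather than fixed duration.

```python
# Toy encoding of a TEG and a temporally flexible action; the classes
# and the feasibility check are assumptions for exposition only.
from dataclasses import dataclass

@dataclass
class FlexibleAction:
    name: str
    min_dur: float     # lower bound on duration (s)
    max_dur: float     # upper bound on duration (s)

@dataclass
class TemporallyExtendedGoal:
    condition: str     # e.g. "depth <= 5.0" while sampling
    start: float       # condition must hold from `start`...
    end: float         # ...through `end` (s into the mission)

def duration_feasible(action: FlexibleAction,
                      goal: TemporallyExtendedGoal) -> bool:
    """Can one execution of `action` span the whole goal interval?

    Needs some duration d with min_dur <= d <= max_dur and d >= interval length.
    """
    needed = goal.end - goal.start
    return needed <= action.max_dur

# Example: a survey action of 10-20 min covering a 15-min sampling goal.
survey = FlexibleAction("survey_region", min_dur=600.0, max_dur=1200.0)
sample = TemporallyExtendedGoal("hold depth <= 5.0", start=0.0, end=900.0)
assert duration_feasible(survey, sample)
```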