
Behnke

AAAI Conferences

The integration of the various specialized components of cognitive systems poses a challenge, in particular for architectures that combine planning, inference, and human-computer interaction (HCI). We present an approach that exploits a single source of common knowledge contained in an ontology. From this knowledge, specialized domain models for the cognitive system's components can be generated automatically. Our integration targets hierarchical planning, which is well suited for HCI because it mimics how humans plan. We show how the hierarchical structures of such planning domains can be (partially) inferred from declarative background knowledge.
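To illustrate the idea, the sketch below (a minimal hypothetical example, not the authors' system) turns "composed-of" relations from a small hand-written ontology directly into HTN-style decomposition methods:

```python
# Illustrative only: a hypothetical ontology mapping an abstract task
# concept to the sub-concepts it is composed of.
ontology = {
    "PrepareDrink": ["FetchCup", "BrewCoffee", "ServeDrink"],
    "BrewCoffee": ["AddWater", "AddGrounds", "StartMachine"],
}

def derive_methods(ontology):
    """Turn each composed-of relation into one decomposition method."""
    return {task: {"name": f"m_{task}", "subtasks": subs}
            for task, subs in ontology.items()}

methods = derive_methods(ontology)
```

Each abstract concept yields one method whose subtasks are the concept's parts; a real system would additionally infer ordering constraints and preconditions from the background knowledge.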


Balancing Explicability and Explanation in Human-Aware Planning

AAAI Conferences

Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable to a human observer, as well as the ability to provide explanations when such plans cannot be generated. This has led to the notion of "multi-model planning," which aims to incorporate the effects of human expectation in the deliberative process of a planner, either in the form of explicable task planning or of explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a trade-off during the plan generation process itself by means of a model-space search method, MEGA. This in effect provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware" by bringing together existing principles of planning under the umbrella of a single plan generation process. We situate our discussion specifically with the recent work on explicable planning and explanation generation in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot involved in a typical search-and-reconnaissance task with an external supervisor.
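The trade-off being searched over can be caricatured as a scalarized objective over candidate explanations. The sketch below uses hypothetical numbers and a plain weighted sum; it is not the paper's actual model-space search, only the shape of the decision it makes:

```python
# Hypothetical candidates in the model space: each pairs the length of
# the explanation (number of model updates communicated to the human)
# with the divergence of the resulting plan from the human's expectation.
candidates = [
    {"explanation_len": 0, "plan_divergence": 4},  # say nothing, act inexplicably
    {"explanation_len": 3, "plan_divergence": 0},  # explain everything up front
    {"explanation_len": 1, "plan_divergence": 1},  # balance the two
]

def objective(cand, alpha=1.0):
    # alpha trades the cost of explaining against residual inexplicability
    return cand["explanation_len"] + alpha * cand["plan_divergence"]

best = min(candidates, key=objective)
```

With alpha = 1 the balanced candidate wins; raising alpha pushes the agent toward explaining more and diverging less, which is the dial the paper's method exposes.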


On the Relationship Between KR Approaches for Explainable Planning

arXiv.org Artificial Intelligence

In this paper, we build upon notions from knowledge representation and reasoning (KR) to expand a preliminary logic-based framework that characterizes the model reconciliation problem for explainable planning. We also provide a detailed exposition on the relationship between similar KR techniques, such as abductive explanations and belief change, and their applicability to explainable planning.


Towards Contrastive Explanations for Comparing the Ethics of Plans

arXiv.org Artificial Intelligence

We are interested in models where actions are deterministic, durationless, and can be performed one at a time. We also assume a known initial state and goal. Traditionally, ethical principles of single decisions are evaluated [1]. In the context of AI Planning this means analysing a massive number of isolated decisions that may not make sense without the context in which they are being made. Therefore, it is preferable to evaluate the ethical contents of a plan as a whole. Lindner et al. [2] describe an approach to judging ...

This can be done through contrastive explanations [5], which focus on explaining the difference between a factual event A and a contrasting event B. To produce these explanations, one must reason about the hypothetical alternative B, which likely means constructing an alternative plan where B is included rather than A. The original model is constrained to produce a hypothetical planning model (HModel). The solution to the HModel is the hypothetical plan (HPlan) that contains the contrast case expected by the user.
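The HModel/HPlan construction can be sketched on a toy domain. Everything below is hypothetical (the domain, and a naive breadth-first planner standing in for a real one): forbid the factual action A, replan, and the resulting HPlan contains the contrast case B:

```python
# Toy STRIPS-like model: action name -> (preconditions, effects) over facts.
actions = {
    "drive": ({"at_home"}, {"at_work"}),
    "cycle": ({"at_home"}, {"at_work"}),
    "work":  ({"at_work"}, {"done"}),
}

def plan(model, init, goal):
    """Breadth-first search for a plan (illustrative only)."""
    frontier, seen = [(frozenset(init), [])], set()
    while frontier:
        state, steps = frontier.pop(0)
        if goal <= state:
            return steps
        if state in seen:
            continue
        seen.add(state)
        for name, (pre, eff) in model.items():
            if pre <= state:
                frontier.append((state | eff, steps + [name]))
    return None

factual = plan(actions, {"at_home"}, {"done"})
# User asks "why drive (A) rather than cycle (B)?": constrain the model
# by removing A, yielding the HModel, and replan to obtain the HPlan.
hmodel = {n: a for n, a in actions.items() if n != "drive"}
hplan = plan(hmodel, {"at_home"}, {"done"})
```

Comparing the factual plan with the HPlan (here, one using "cycle") is what lets the two alternatives, including their ethical contents, be contrasted as whole plans rather than as isolated decisions.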


Assumption-Based Planning: Generating Plans and Explanations under Incomplete Knowledge

AAAI Conferences

Many practical planning problems necessitate the generation of a plan under incomplete information about the state of the world. In this paper we propose the notion of Assumption-Based Planning. Unlike conformant planning, which attempts to find a plan under all possible completions of the initial state, an assumption-based plan supports the assertion of additional assumptions about the state of the world, often resulting in high-quality plans where no conformant plan exists. We are interested in this paradigm of planning for two reasons: 1) it captures a compelling form of commonsense planning, and 2) it is of great utility in the generation of explanations, diagnoses, and counter-examples -- tasks which share a computational core with assumption-based planning. We formalize the notion of assumption-based planning, establishing a relationship between assumption-based and conformant planning, and prove properties of such plans. We further address the scenario where some assumptions are preferred over others. Exploiting the correspondence with conformant planning, we propose a means of computing assumption-based plans via a translation to classical planning. Our translation is an extension of the popular approach proposed by Palacios and Geffner and realized in their T0 planner. We have implemented our planner, A0, as a variant of T0 and tested it on a number of expository domains drawn from the International Planning Competition. Our results illustrate the utility of this new planning paradigm.
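One common way to encode this style of planning, sketched below on a hypothetical toy domain (this is an assumed encoding for illustration, not a reproduction of the A0/T0 translation), is to give each unknown initial fact a precondition-free "assume" action; the assume actions that appear in the plan are exactly the assumptions the plan commits to:

```python
# Unknown initial facts: true or false, the agent does not know which.
unknown = {"door_unlocked"}
model = {
    "open_door": ({"door_unlocked"}, {"door_open"}),
    "enter":     ({"door_open"}, {"inside"}),
}
# Translation step: one zero-precondition assumption action per unknown fact.
for fact in unknown:
    model[f"assume_{fact}"] = (set(), {fact})

def plan(model, init, goal):
    """Naive breadth-first planner (illustrative only)."""
    frontier, seen = [(frozenset(init), [])], set()
    while frontier:
        state, steps = frontier.pop(0)
        if goal <= state:
            return steps
        if state in seen:
            continue
        seen.add(state)
        for name, (pre, eff) in model.items():
            if pre <= state:
                frontier.append((state | eff, steps + [name]))
    return None

steps = plan(model, set(), {"inside"})
assumptions = [s for s in steps if s.startswith("assume_")]
```

No conformant plan exists here (if the door is locked, nothing works), yet the assumption-based plan succeeds by explicitly asserting that the door is unlocked; reporting `assumptions` back is what makes the plan useful for explanation and diagnosis.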