
Collaborating Authors

 Smith, David E.


A Unifying Bayesian Formulation of Measures of Interpretability in Human-AI Interaction

arXiv.org Artificial Intelligence

Existing approaches for generating human-aware agent behaviors have considered different measures of interpretability in isolation. Further, these measures have been studied under differing assumptions, thus precluding the possibility of designing a single framework that captures these measures under the same assumptions. In this paper, we present a unifying Bayesian framework that models a human observer's evolving beliefs about an agent, and thereby define the problem of Generalized Human-Aware Planning. We show that the definitions of interpretability measures like explicability, legibility and predictability from the prior literature fall out as special cases of our general framework. Through this framework, we also bring to light a previously ignored fact: human-robot interactions are in effect open-world problems, particularly as a result of modeling the human's beliefs about the agent, since the human may not only hold beliefs unknown to the agent but may also form new hypotheses about the agent when presented with novel or unexpected behaviors.
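
To make the flavor of this concrete, here is a minimal sketch of the kind of Bayesian belief update over hypothesized agent models that such a framework involves. This is an illustration only, not the paper's formulation; the model names, prior, and likelihood below are invented.

    # Minimal sketch of a Bayesian observer updating beliefs over
    # hypothesized agent models as behavior is observed.
    # Models, prior, and likelihood are illustrative placeholders.

    def update_beliefs(prior, likelihood, observation):
        """One step of Bayes' rule: P(m | o) ~ P(o | m) * P(m)."""
        posterior = {m: prior[m] * likelihood(m, observation) for m in prior}
        z = sum(posterior.values())
        if z == 0:
            # The open-world wrinkle the paper highlights: an observation
            # that no current hypothesis explains should trigger hypothesis
            # revision, not a division by zero. Here we just keep the prior.
            return dict(prior)
        return {m: p / z for m, p in posterior.items()}

    # Two hypothetical agent models and a toy likelihood of an observed move.
    prior = {"model_A": 0.5, "model_B": 0.5}
    likelihood = lambda m, o: {"model_A": 0.9, "model_B": 0.2}[m] if o == "left" else 0.5

    beliefs = update_beliefs(prior, likelihood, "left")
    print(beliefs)  # belief shifts toward model_A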


Contrastive Explanations of Plans Through Model Restrictions

arXiv.org Artificial Intelligence

In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user's expectation. We frame Explainable AI Planning in the context of the plan negotiation problem, in which a succession of hypothetical planning problems are generated and solved. The object of the negotiation is for the user to understand and ultimately arrive at a satisfactory plan. We present the results of a user study that demonstrates that when users ask questions about plans, those questions are contrastive, i.e. "why A rather than B?". We use the data from this study to construct a taxonomy of user questions that often arise during plan negotiation. We formally define our approach to plan negotiation through model restriction as an iterative process. This approach generates hypothetical problems and contrastive plans by restricting the model through constraints implied by user questions. We formally define model-based compilations in PDDL2.1 of each constraint derived from a user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations were implemented as part of an explanation framework that employs iterative model restriction. We demonstrate its benefits in a second user study.
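
The iterative model-restriction loop the abstract describes can be sketched in a few lines. The planner call, constraint compilation, and user interaction below are hypothetical stubs standing in for the paper's PDDL2.1 machinery.

    # Sketch of plan negotiation through iterative model restriction.
    # `solve` stands in for a PDDL2.1 planner; `compile_constraint`
    # stands in for the paper's model-based compilations of user questions.

    def negotiate(model, solve, compile_constraint, ask_user):
        plan = solve(model)
        while True:
            question = ask_user(plan)  # e.g. "why A rather than B?"
            if question is None:       # user is satisfied with the plan
                return plan
            # Restrict the model with the constraint implied by the question,
            # then solve the hypothetical problem to get a contrastive plan.
            model = compile_constraint(model, question)
            plan = solve(model)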


A Bayesian Account of Measures of Interpretability in Human-AI Interaction

arXiv.org Artificial Intelligence

Existing approaches for the design of interpretable agent behavior consider different measures of interpretability in isolation. In this paper we posit that, in the design and deployment of human-aware agents in the real world, notions of interpretability are just some among many considerations, and that techniques developed in isolation lack two key properties needed to be useful when considered together: they must be able to 1) deal with their mutually competing properties, and 2) handle an open world where the human is not there just to interpret behavior in one specific form. To this end, we consider three well-known instances of interpretable behavior studied in the existing literature -- namely, explicability, legibility, and predictability -- and propose a revised model where all these behaviors can be meaningfully modeled together. We highlight interesting consequences of this unified model and motivate, through the results of a user study, why this revision is necessary.
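
As a toy illustration of how the three measures can coexist in one model, consider scoring each against posteriors computed by a Bayesian observer. These definitions are illustrative stand-ins, not the paper's formal ones, and the distributions below are invented.

    # Illustrative scores over observer posteriors. Toy stand-ins only.

    def legibility(goal_posterior, true_goal):
        """Higher when the behavior so far singles out the agent's true goal."""
        return goal_posterior[true_goal]

    def predictability(completion_posterior, actual_completion):
        """Higher when the observer would predict what the agent does next."""
        return completion_posterior[actual_completion]

    def explicability(behavior_likelihood_under_expected_model):
        """Higher when behavior looks like what the observer expects."""
        return behavior_likelihood_under_expected_model

    print(legibility({"g1": 0.8, "g2": 0.2}, "g1"))                 # 0.8
    print(predictability({"up,up": 0.7, "up,left": 0.3}, "up,up"))  # 0.7
    print(explicability(0.9))                                       # 0.9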


FAPE: a Constraint-based Planner for Generative and Hierarchical Temporal Planning

arXiv.org Artificial Intelligence

Temporal planning offers numerous advantages when based on an expressive representation. Timelines have been known to provide the required expressiveness, but at the cost of search efficiency. We propose here a temporal planner, called FAPE, which supports many of the expressive temporal features of the ANML modeling language without losing efficiency. FAPE's representation coherently integrates flexible timelines with hierarchical refinement methods that can provide efficient control knowledge. A novel reachability analysis technique is proposed and used to develop causal networks that constrain the search space. It is employed in the design of informed heuristics, inference methods and efficient search strategies. Experimental results on common benchmarks in the field allow us to assess the components and search strategies of FAPE, and to compare it to IPC planners. The results show the proposed approach to be competitive with less expressive planners and often superior when hierarchical control knowledge is provided. FAPE, a freely available system, provides other features not covered here, such as the integration of planning with acting, and the handling of sensing actions in partially observable environments.
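
For readers unfamiliar with timelines, the following sketch conveys their basic flavor: tokens asserting values of a state variable over flexibly bounded intervals. Real FAPE/ANML timelines are far richer; the classes and example below are invented for illustration.

    from dataclasses import dataclass, field

    # Toy flavor of a flexible timeline: tokens assert values of a state
    # variable over temporal intervals whose bounds are constrained rather
    # than fixed. Not FAPE's actual representation.

    @dataclass
    class Token:
        state_var: str
        value: str
        start: tuple  # (lower, upper) bound on start time
        end: tuple    # (lower, upper) bound on end time

    @dataclass
    class Timeline:
        state_var: str
        tokens: list = field(default_factory=list)

        def add(self, token):
            assert token.state_var == self.state_var
            self.tokens.append(token)

    tl = Timeline("robot_location")
    tl.add(Token("robot_location", "lab", (0, 0), (5, 10)))
    tl.add(Token("robot_location", "hall", (5, 10), (12, 20)))
    print(tl)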


Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior

arXiv.org Artificial Intelligence

There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop. However, the work in this area has typically lacked coherence on the topic, with proposed solutions for "explicable", "legible", "predictable" and "transparent" planning with overlapping, and sometimes conflicting, semantics all aimed at some notion of understanding what intentions the observer will ascribe to an agent by observing its behavior. This is also true for the recent works on "security" and "privacy" of plans which are also trying to answer the same question, but from the opposite point of view -- i.e. when the agent is trying to hide instead of revealing its intentions. This paper attempts to provide a workable taxonomy of relevant concepts in this exciting and emerging field of inquiry.


Goal Recognition with Noisy Observations

AAAI Conferences

It may be that one agent needs to monitor the activities of another agent, attempt to assist the other agent, or simply avoid getting in the way while performing its own duties. For all of these cases the agent needs to be able to realize what the other agent is doing. In the absence of full and timely communication of plans and goals, goal and plan recognition becomes essential. Many goal recognition techniques allow the sequence of observations to be incomplete, but few consider the possibility of noisy observations; in practice, this is not a realistic assumption. We build on the approach of Ramirez and Geffner (2010), which estimates the probability of each possible goal based on the difference between the cost of the best plan for the goal given the observed actions, Cost(G|O), and the cost of the best plan for the goal without the observed actions, Cost(G|¬O). The big difference here is that the observations only indirectly give us probabilities for actions in the plan graph. We therefore first construct a Bayesian Network (BN) to estimate these action probabilities, and then use this probability information in the plan graph to compute the expected cost for each goal, given the observations.
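
The cost-difference computation can be summarized compactly. The sketch below follows the Ramirez-and-Geffner-style Boltzmann formulation with invented costs; the BN-based estimation of action probabilities described above is not modeled here.

    import math

    # Sketch of turning plan-cost differences into goal probabilities,
    # in the style of Ramirez and Geffner (2010). Costs are invented.

    def goal_posterior(costs_with_obs, costs_without_obs, prior, beta=1.0):
        """P(G|O) ~ prior(G) * exp(-beta * (Cost(G|O) - Cost(G|not O)))."""
        weights = {
            g: prior[g] * math.exp(-beta * (costs_with_obs[g] - costs_without_obs[g]))
            for g in prior
        }
        z = sum(weights.values())
        return {g: w / z for g, w in weights.items()}

    post = goal_posterior(
        costs_with_obs={"g1": 10, "g2": 14},
        costs_without_obs={"g1": 10, "g2": 9},
        prior={"g1": 0.5, "g2": 0.5},
    )
    print(post)  # g1 explains the observations at no extra cost, so it dominates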


Compiling Away Uncertainty in Strong Temporal Planning with Uncontrollable Durations

AAAI Conferences

Real world temporal planning often involves dealing with uncertainty about the duration of actions. In this paper, we describe a sound-and-complete compilation technique for strong planning that reduces any planning instance with uncertainty in the duration of actions to a plain temporal planning problem without uncertainty. We evaluate our technique by comparing it with a recent technique for PDDL domains with temporal uncertainty. The experimental results demonstrate the practical applicability of our approach and show complementary behavior with respect to previous techniques. We also demonstrate the high expressiveness of the translation by applying it to a significant fragment of the ANML language.
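
To convey the general idea (though not the paper's actual construction), the sketch below applies a deliberately naive worst-case rewrite: treat an uncontrollable duration in [lo, hi] as fixed at hi. Unlike the paper's sound-and-complete compilation, this simplification is neither sound nor complete in general; the class and names are invented.

    from dataclasses import dataclass, replace

    # Toy illustration only: pin an uncontrollable duration to its upper
    # bound, turning the action into an ordinary controllable temporal
    # action. This worst-case rewrite is NOT the paper's compilation.

    @dataclass(frozen=True)
    class TemporalAction:
        name: str
        dur_lo: float
        dur_hi: float
        controllable: bool

    def compile_away(action: TemporalAction) -> TemporalAction:
        if action.controllable:
            return action
        # Reserve the full window the environment might use; in concurrent
        # settings this conservative trick can still produce invalid plans,
        # which is why a proper compilation is needed.
        return replace(action, dur_lo=action.dur_hi, controllable=True)

    a = TemporalAction("drive", dur_lo=3.0, dur_hi=5.0, controllable=False)
    print(compile_away(a))  # duration fixed at 5.0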


A Fast Goal Recognition Technique Based on Interaction Estimates

AAAI Conferences

Goal Recognition is the task of inferring an actor's goals given some or all of the actor's observed actions. There is considerable interest in Goal Recognition for use in intelligent personal assistants, smart environments, intelligent tutoring systems, and monitoring users' needs. In much of this work, the actor's observed actions are compared against a generated library of plans. Recent work by Ramirez and Geffner makes use of AI planning to determine how closely a sequence of observed actions matches plans for each possible goal. For each goal, this is done by comparing the cost of a plan for that goal with the cost of a plan for that goal that includes the observed actions. This approach yields useful rankings, but is impractical for real-time goal recognition in large domains because of the computational expense of constructing plans for each possible goal. In this paper, we introduce an approach that propagates cost and interaction information in a plan graph, and uses this information to estimate goal probabilities. We show that this approach is much faster, but still yields high quality results.
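
A bare-bones version of cost propagation through a relaxed plan graph is sketched below. It omits the interaction estimates that give the paper's technique its accuracy, amounting to plain additive-cost propagation; the toy domain is invented.

    # Sketch of additive cost propagation in a relaxed plan graph:
    # cost(p) = min over actions achieving p of (action cost + sum of
    # precondition costs). The paper also propagates interaction
    # information; this h_add-style propagation is just the skeleton.

    def propagate_costs(actions, init_props, iterations=10):
        cost = {p: 0.0 for p in init_props}
        for _ in range(iterations):  # iterate to a fixed point
            for name, (pres, effs, act_cost) in actions.items():
                if all(p in cost for p in pres):
                    c = act_cost + sum(cost[p] for p in pres)
                    for e in effs:
                        if e not in cost or c < cost[e]:
                            cost[e] = c
        return cost

    actions = {
        "move_a_b": (["at_a"], ["at_b"], 1.0),
        "move_b_c": (["at_b"], ["at_c"], 1.0),
    }
    print(propagate_costs(actions, ["at_a"]))
    # {'at_a': 0.0, 'at_b': 1.0, 'at_c': 2.0}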


Planning as an Iterative Process

AAAI Conferences

Activity planning for missions such as the Mars Exploration Rover mission presents many technical challenges, including oversubscription, consideration of time, concurrency, resources, preferences, and uncertainty. These challenges have all been addressed by the research community to varying degrees, but significant technical hurdles still remain. In addition, the integration of these capabilities into a single planning engine remains largely unaddressed. However, I argue that there is a deeper set of issues that needs to be considered -- namely the integration of planning into an iterative process that begins before the goals, objectives, and preferences are fully defined. This introduces a number of technical challenges for planning, including the ability to more naturally specify and utilize constraints on the planning process, the ability to generate multiple qualitatively different plans, and the ability to provide deep explanation of plans.


Probabilistic Plan Graph Heuristic for Probabilistic Planning

AAAI Conferences

This work focuses on developing domain-independent heuristics for probabilistic planning problems characterized by full observability and non-deterministic action effects expressed as probability distributions. The approach is to first search for a high-probability deterministic plan using a classical planner. A novel probabilistic plan graph heuristic is used to guide the search towards high-probability plans. The resulting plans can be used in a system that handles unexpected outcomes by runtime replanning. The plans can also be incrementally augmented with contingency branches for the most critical action outcomes. This abstract describes the steps we have taken toward completing this work and the results obtained so far.
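
One standard way to realize the "high-probability deterministic plan" step is an all-outcomes determinization in which each outcome becomes a deterministic action with cost -log(probability), so a cost-minimizing classical planner prefers likely plans. This is a well-known trick, not necessarily the paper's exact construction; the action set below is invented.

    import math

    # Sketch: turn each probabilistic outcome into a deterministic action
    # whose cost is -log(outcome probability), so a cost-minimizing
    # classical planner prefers high-probability plans. The paper's
    # plan-graph heuristic itself is not reproduced here.

    def determinize(prob_actions):
        det = {}
        for name, outcomes in prob_actions.items():
            for i, (prob, effects) in enumerate(outcomes):
                det[f"{name}_o{i}"] = (effects, -math.log(prob))
        return det

    prob_actions = {
        "pickup": [(0.9, ["holding"]), (0.1, ["dropped"])],
    }
    for act, (effects, cost) in determinize(prob_actions).items():
        print(act, effects, round(cost, 3))
    # pickup_o0 ['holding'] 0.105   <- cheap = likely, so preferred
    # pickup_o1 ['dropped'] 2.303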