Kambhampati, Subbarao


Explicability as Minimizing Distance from Expected Behavior

arXiv.org Artificial Intelligence

For effective human-AI collaboration, it is not enough simply to address the question of autonomy; an equally important question is how the AI's behavior is perceived by its human counterparts. When an AI agent's task plans are generated without such considerations, they may often appear inexplicable from the human's point of view. This problem arises from the human's partial or inaccurate understanding of the agent's planning process and/or model, and it can have serious implications for human-AI collaboration, from increased cognitive load and reduced trust in the agent to more serious safety concerns in interactions with physical agents. In this paper, we address this issue by modeling plan explicability as a function of the distance between the plan the agent makes and the plan the human expects it to make. To this end, we learn a distance function, based on different plan distance measures, that can accurately model this notion of explicability, and we develop an anytime search algorithm that uses this distance as a heuristic to produce progressively more explicable plans. We evaluate the effectiveness of our approach in a simulated autonomous car domain and a physical service robot domain, and we provide empirical evaluations demonstrating its usefulness in making the planning process of an autonomous agent conform to human expectations.
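The following is a minimal sketch (not the paper's implementation) of the anytime idea described above: a best-first search that uses a plan-distance estimate as its heuristic and yields progressively more explicable plans. Here learned_distance is a hypothetical placeholder for the learned distance model, using one of the simple base measures (action-set difference) such a model could combine, and the state-space interface (successors, goal_test) is assumed.

    import heapq
    import itertools

    def learned_distance(plan, expected_plan):
        """Placeholder for the learned distance model: a simple action-set
        difference between the agent's plan and the plan the human expects
        (one of the base measures such a model could be built from)."""
        a, b = set(plan), set(expected_plan)
        return len(a ^ b) / max(1, len(a | b))

    def anytime_explicable_search(initial, goal_test, successors, expected_plan,
                                  max_expansions=10_000):
        """Yield goal-reaching plans of strictly decreasing distance from
        the expected plan. `successors(state)` returns (action, next_state)
        pairs; `goal_test(state)` returns True at goal states."""
        counter = itertools.count()          # tie-breaker for the heap
        frontier = [(0.0, next(counter), initial, [])]
        best = float("inf")
        for _ in range(max_expansions):
            if not frontier:
                return
            _, _, state, plan = heapq.heappop(frontier)
            if goal_test(state):
                d = learned_distance(plan, expected_plan)
                if d < best:                 # anytime: emit only improvements
                    best = d
                    yield plan, d
                continue
            for action, nxt in successors(state):
                new_plan = plan + [action]
                h = learned_distance(new_plan, expected_plan)
                heapq.heappush(frontier, (len(new_plan) + h, next(counter),
                                          nxt, new_plan))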


UbuntuWorld 1.0 LTS - A Platform for Automated Problem Solving & Troubleshooting in the Ubuntu OS

arXiv.org Artificial Intelligence

In this paper, we present UbuntuWorld 1.0 LTS - a platform for developing automated technical support agents in the Ubuntu operating system. Specifically, we propose to use the Bash terminal as a simulator of the Ubuntu environment for a learning-based agent and demonstrate the usefulness of adopting reinforcement learning (RL) techniques for basic problem solving and troubleshooting in this environment. We provide a plug-and-play interface to the simulator as a Python package in which different types of agents can be plugged in and evaluated, and we provide pathways for integrating data from online support forums such as AskUbuntu into an automated agent's learning process. Finally, we show that the use of this data significantly improves the agent's learning efficiency. We believe that this platform can be adopted as a real-world test bed for research on automated technical support.
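As a rough illustration of how a plugged-in RL agent might interact with such a simulator, here is a tabular Q-learning loop against an assumed reset/step environment interface. The env API, state/action encodings, and the `prior` hook are hypothetical stand-ins, not the actual UbuntuWorld package interface.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1,
                   prior=None):
        """Tabular Q-learning; `prior` can seed Q-values, e.g. from action
        frequencies mined from AskUbuntu posts (reflecting the paper's point
        that forum data can speed up learning)."""
        q = defaultdict(float, prior or {})
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                actions = env.available_actions(state)
                if random.random() < eps:                  # explore
                    action = random.choice(actions)
                else:                                      # exploit
                    action = max(actions, key=lambda a: q[(state, a)])
                next_state, reward, done = env.step(action)
                best_next = max((q[(next_state, a)]
                                 for a in env.available_actions(next_state)),
                                default=0.0)
                q[(state, action)] += alpha * (
                    reward + gamma * best_next - q[(state, action)])
                state = next_state
        return q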


Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy

arXiv.org Artificial Intelligence

When AI systems interact with humans in the loop, they are often called on to provide explanations for their plans and behavior. Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decisions in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios, where humans have domain and task models that differ significantly from that used by the AI system. We posit that explanations are best studied in light of these differing models. In particular, we show how explanation can be seen as a "model reconciliation problem" (MRP), in which the AI system in effect suggests changes to the human's model so as to make its plan optimal with respect to that changed human model. We study the properties of such explanations, present algorithms for automatically computing them, and evaluate the performance of these algorithms.
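A minimal sketch of the reconciliation search, under stated assumptions: it enumerates edit sets to the human's model in order of size until the robot's plan becomes optimal in the edited model. The `is_optimal` oracle (standing in for a call to an optimal planner) and `apply_edit` are hypothetical helpers, not the paper's algorithms.

    from itertools import combinations

    def minimal_explanation(plan, human_model, model_diffs, is_optimal,
                            apply_edit):
        """Return a smallest set of edits to `human_model` under which
        `plan` is optimal; breadth-first over edit-set size mirrors the
        search for minimally sized explanations."""
        for k in range(len(model_diffs) + 1):
            for edits in combinations(model_diffs, k):
                model = human_model
                for e in edits:
                    model = apply_edit(model, e)
                if is_optimal(plan, model):
                    return list(edits)
        return None  # no explanation within the given model differences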


A Formal Analysis of Required Cooperation in Multi-Agent Planning

AAAI Conferences

It is well understood that, through cooperation, multiple agents can achieve tasks that are unachievable by a single agent. However, there are no formal characterizations of situations where cooperation is required to achieve a goal, thus warranting the application of multiple agents. In this paper, we provide such a formal characterization for multi-agent planning problems with sequential action execution. We first show that determining whether there is required cooperation (RC) is in general intractable, even in this limited setting. As a result, we start our analysis with a subset of more restrictive problems where agents are homogeneous. For such problems, we identify two conditions that can cause RC. We establish that when neither of these conditions holds, the problem is single-agent solvable; otherwise, we provide upper bounds on the minimum number of agents required. For the remaining problems with heterogeneous agents, we further divide them into two subsets. For one of the subsets, we propose the concept of a transformer agent to reduce the number of agents that must be considered, which is used to improve planning performance. We implemented a planner using our theoretical results and compared it with one of the best IPC CoDMAP planners in the centralized track. Results show that our planner provides significantly improved performance on IPC CoDMAP domains.
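For intuition, a brute-force version of the basic RC test is tiny: cooperation is required exactly when no single agent can achieve the goal alone. The `solvable_by` oracle below is a hypothetical single-agent planner call (e.g. a classical planner run on the problem restricted to one agent); the paper's contribution is characterizing RC without such exhaustive checks.

    def required_cooperation(problem, agents, solvable_by):
        """True iff the multi-agent problem is not single-agent solvable
        by any individual agent."""
        return not any(solvable_by(problem, agent) for agent in agents)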


Plan Explicability and Predictability for Robot Task Planning

arXiv.org Artificial Intelligence

Intelligent robots and machines are becoming pervasive in human-populated environments. A desirable capability of these agents is to respond to goal-oriented commands by autonomously constructing task plans. However, such autonomy can add significant cognitive load and potentially introduce safety risks for humans when agents behave unexpectedly. Hence, for such agents to be helpful, one important requirement is that they synthesize plans that can be easily understood by humans. While previous work has studied socially acceptable robots that interact with humans in "natural ways" and has investigated legible motion planning, a general solution for high-level task planning is still lacking. To address this issue, we introduce the notions of plan explicability and predictability. To compute these measures, we first postulate that humans understand agent plans by associating abstract tasks with agent actions, which can be considered a labeling process. We learn this labeling scheme from training examples using conditional random fields (CRFs), and then use the learned model to label a new plan to compute its explicability and predictability. These measures can be used by agents to proactively choose or directly synthesize plans that are more explicable and predictable to humans. We provide evaluations on a synthetic domain and with human subjects using physical robots to show the effectiveness of our approach.
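One simple way to operationalize the scoring step, as a sketch: given a trained labeler, score a plan's explicability as the fraction of its actions that the model can associate with some abstract task. Here `label_plan` stands in for the trained CRF, and the "NONE" label for uninterpretable actions is an assumption of this sketch, not the paper's exact measure.

    def explicability_score(plan, label_plan, uninterpretable="NONE"):
        """`label_plan(plan)` returns one task label per action; actions the
        human cannot map to any task get the `uninterpretable` label."""
        if not plan:
            return 1.0
        labels = label_plan(plan)
        return sum(l != uninterpretable for l in labels) / len(plan)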


A Heuristic Approach to Planning with Incomplete STRIPS Action Models

AAAI Conferences

Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task, so real-world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, they are often able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In this paper, we study planning problems with incomplete STRIPS domain models where the annotations specify possible preconditions and effects of actions. We show that the problem of assessing the quality of a plan, or its plan robustness, is #P-complete, establishing its equivalence with weighted model counting. We introduce lower- and upper-bound approximations of plan robustness, and then utilize them to derive heuristics for synthesizing robust plans. Our planning system, PISA, which incorporates stochastic local search with these techniques, outperforms a state-of-the-art planner for incomplete domains in most of the tested domains, both in terms of plan quality and planning time.
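Since exact robustness assessment is #P-complete (it reduces to weighted model counting), a cheap stand-in for intuition is a Monte Carlo estimate: sample completions of the annotated model and count the fraction under which the plan succeeds. This is a sketch of the robustness semantics, not PISA's lower/upper bounds; `executes` is a hypothetical simulator of STRIPS execution under a concrete completed model.

    import random

    def estimate_robustness(plan, base_model, possible_items, executes,
                            samples=1000, rng=random.Random(0)):
        """Each item in `possible_items` is an annotated possible
        precondition/effect; a completion decides independently whether each
        one holds (uniformly here; weights could be attached instead)."""
        successes = 0
        for _ in range(samples):
            completion = {item: rng.random() < 0.5 for item in possible_items}
            if executes(plan, base_model, completion):
                successes += 1
        return successes / samples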


Refining Incomplete Planning Domain Models Through Plan Traces

AAAI Conferences

Most existing work on learning planning models assumes that the entire model needs to be learned from scratch. A more realistic situation is that the planning agent has an incomplete model that it needs to refine through learning. In this paper, we propose and evaluate a method for doing this. Our method takes as input an incomplete model (with missing preconditions and effects in the actions), as well as a set of plan traces that are known to be correct. It outputs a refined model that captures not only additional precondition/effect knowledge about the given actions, but also macro actions. We use a MAX-SAT framework for learning, where the constraints are derived from the executability of the given plan traces as well as from the preconditions/effects of the given incomplete model. Unlike traditional macro-action learners, which use macros to increase the efficiency of planning (in the context of a complete model), our motivation for learning macros is to increase the accuracy (robustness) of the plans generated with the refined model. We demonstrate the effectiveness of our approach through a systematic empirical evaluation.
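To make the executability constraints concrete, here is a sketch of one family of hard clauses such an encoding could derive: since each trace is known to be executable, any candidate precondition that was false in the state just before an action occurrence cannot be a real precondition of that action. The trace encoding and variable naming below are assumptions of this illustration; a full encoding would add effect constraints and soft clauses from the incomplete model.

    def precondition_constraints(traces, candidate_pres):
        """`traces` are alternating [state, action, state, ...] sequences
        with states as sets of facts; `candidate_pres` maps each action to
        its annotated possible preconditions. Returns hard unit clauses
        ruling out impossible preconditions."""
        clauses = []
        for trace in traces:
            for i in range(1, len(trace), 2):       # actions at odd indices
                state, action = trace[i - 1], trace[i]
                for fact in candidate_pres.get(action, ()):
                    if fact not in state:
                        # unit clause: NOT pre(action, fact)
                        clauses.append((("pre", action, fact), False))
        return clauses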


Model-Lite Case-Based Planning

AAAI Conferences

There is increasing awareness in the planning community that depending on complete models impedes the applicability of planning technology in many real-world domains, where the burden of specifying complete domain models is too high. In this paper, we consider a novel solution for this challenge that combines generative planning on incomplete domain models with a library of plan cases that are known to be correct. While this was arguably the original motivation for case-based planning, most existing case-based planners assume (and depend on) from-scratch planners that work on complete domain models. In contrast, our approach views the plan generated with respect to the incomplete model as a "skeletal plan" and augments it with directed mining of plan fragments from library cases. We present the details of our approach and an empirical evaluation of our method in comparison to a state-of-the-art case-based planner that depends on complete domain models.
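A minimal sketch of the augmentation step, under stated assumptions: for each condition the skeletal plan leaves unsupported under the incomplete model, splice in the shortest library fragment that establishes it. The `flaws` input and the `achieves` check are hypothetical helpers, and the greedy splicing below is an illustration, not the paper's directed mining procedure.

    def augment_with_fragments(skeletal_plan, flaws, case_library, achieves,
                               max_len=4):
        """`flaws` is a list of (position, condition) pairs; for each one,
        splice in the shortest contiguous case subsequence that achieves
        the condition. Flaws are processed right-to-left so earlier splice
        positions stay valid."""
        plan = list(skeletal_plan)
        for position, condition in sorted(flaws, key=lambda f: f[0],
                                          reverse=True):
            best = None
            for case in case_library:
                for i in range(len(case)):
                    for j in range(i + 1, min(i + 1 + max_len, len(case) + 1)):
                        frag = case[i:j]
                        if achieves(frag, condition) and (
                                best is None or len(frag) < len(best)):
                            best = frag
            if best:
                plan[position:position] = best
        return plan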


Loosely Coupled Formulations for Automated Planning: An Integer Programming Perspective

arXiv.org Artificial Intelligence

We represent planning as a set of loosely coupled network flow problems, where each network corresponds to one of the state variables in the planning domain. The network nodes correspond to the state variable values, and the network arcs correspond to the value transitions. The planning problem is to find a path (a sequence of actions) in each network such that, when merged, the paths constitute a feasible plan. In this paper, we present a number of integer programming formulations that model these loosely coupled networks with varying degrees of flexibility. Since merging may introduce exponentially many ordering constraints, we implement a so-called branch-and-cut algorithm, in which these constraints are dynamically generated and added to the formulation when needed. Our results are very promising: they improve upon previous planning-as-integer-programming approaches and lay the foundation for integer programming approaches to cost-optimal planning.
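To show the shape of one such network, here is a minimal sketch (using the PuLP modeling library, which the paper does not necessarily use) of a single state variable's value-transition network over a fixed horizon, with flow conservation at each (value, step) node. The full formulations couple one network per state variable through the actions, and the ordering (merge) constraints would be separated lazily, branch-and-cut style; both are omitted here.

    import pulp

    def flow_network(values, transitions, init, goal, horizon):
        """`transitions` are (from_value, to_value) arcs; self-loop arcs are
        added so the variable can keep its value across a step. Binary arc
        variable x[u, v, t] = 1 means the variable moves from value u at
        step t to value v at step t+1. Values are assumed to be strings or
        ints without spaces, so they yield valid variable names."""
        arcs = set(transitions) | {(v, v) for v in values}   # allow waiting
        prob = pulp.LpProblem("state_variable_flow", pulp.LpMinimize)
        x = {(u, v, t): pulp.LpVariable(f"x_{u}_{v}_{t}", cat="Binary")
             for (u, v) in arcs for t in range(horizon)}
        # objective: minimize the number of real (non-waiting) transitions
        prob += pulp.lpSum(var for (u, v, t), var in x.items() if u != v)
        for t in range(horizon + 1):
            for val in values:
                outflow = pulp.lpSum(var for (u, v, s), var in x.items()
                                     if u == val and s == t)
                inflow = pulp.lpSum(var for (u, v, s), var in x.items()
                                    if v == val and s == t - 1)
                source = 1 if (t == 0 and val == init) else 0
                sink = 1 if (t == horizon and val == goal) else 0
                prob += outflow - inflow == source - sink
        return prob

After prob.solve(), the arcs with x[...].value() == 1 trace the variable's value path; in the real formulations, arc variables are tied to the actions that cause the transitions.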


Synthesizing Robust Plans under Incomplete Domain Models

arXiv.org Artificial Intelligence

Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In such cases, the goal should be to generate plans that are robust with respect to any known incompleteness of the domain. In this paper, we first introduce annotations expressing the knowledge of the domain incompleteness, and formalize the notion of plan robustness with respect to an incomplete domain model. We then propose an approach to compiling the problem of finding robust plans to the conformant probabilistic planning problem. We present experimental results with Probabilistic-FF, a state-of-the-art planner, showing the promise of our approach.
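The compilation idea can be sketched as follows: each annotated possible precondition/effect becomes a hidden proposition that is probabilistically true in the initial belief state, and actions are rewritten to condition on these hidden flags, so that a conformant probabilistic plan is exactly a robust plan. The dict encoding below is a hypothetical illustration, not Probabilistic-FF's input syntax (PPDDL would be used in practice), and the "fail" modeling of violated possible preconditions is a simplification.

    def compile_action(action):
        """Turn annotated possible preconditions into conditional failure
        and possible effects into conditional effects guarded by hidden
        flags. `action` is a dict with keys name/pre/eff and optional
        possible_pre/possible_eff lists of fact names."""
        compiled = {
            "name": action["name"],
            "pre": list(action["pre"]),
            "effects": [(None, e) for e in action["eff"]],  # unconditional
        }
        for p in action.get("possible_pre", []):
            flag = f"poss_pre_{action['name']}_{p}"
            # if the flag holds, p is a real precondition: executing the
            # action while p is false invalidates the plan ("fail")
            compiled["effects"].append(([flag, f"not_{p}"], "fail"))
        for e in action.get("possible_eff", []):
            flag = f"poss_eff_{action['name']}_{e}"
            compiled["effects"].append(([flag], e))   # conditional effect
        return compiled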