Chakraborti, Tathagata
Planning with Explanatory Actions: A Joint Approach to Plan Explicability and Explanations in Human-Aware Planning
Sreedharan, Sarath, Chakraborti, Tathagata, Muise, Christian, Kambhampati, Subbarao
In this work, we formulate the process of generating explanations as model reconciliation for planning problems as one of planning with explanatory actions. We show that these problems could be better understood within the framework of epistemic planning and that, in fact, most earlier works on explanation as model reconciliation correspond to tractable subsets of epistemic planning problems. We empirically show how our approach is computationally more efficient than existing techniques for explanation generation and also discuss how this particular approach could be extended to capture most of the existing variants of explanation as model reconciliation. We end the paper with a discussion of how this formulation could be extended to generate novel explanatory behaviors.
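Purely as an illustration of the formulation this abstract describes, the following is a minimal Python sketch, under toy assumptions, of a joint search in which the agent can either act in the world or explain (i.e., update the human's model of it), trading plan cost against how inexplicable its behavior looks under the human's current model. All encodings and helpers here are hypothetical; the sketch does not reproduce the paper's epistemic-planning machinery.

```python
from heapq import heappush, heappop

def plan_with_explanations(init_state, human_model, physical_actions,
                           explanatory_actions, is_goal, inexplicability):
    """Best-first search over a joint space of (world state, human model).
    physical_actions / explanatory_actions: lists of (name, apply_fn, cost),
    where apply_fn returns a successor state / model or None if inapplicable.
    inexplicability(state, model): penalty for behavior the human would not
    expect under their current model.  All of these are toy stand-ins."""
    frontier = [(0, 0, init_state, human_model, [])]
    seen, tie = set(), 0
    while frontier:
        cost, _, state, model, plan = heappop(frontier)
        if is_goal(state):
            return plan
        if (state, model) in seen:
            continue
        seen.add((state, model))
        # Act in the world: pay action cost plus an inexplicability penalty.
        for name, apply_a, c in physical_actions:
            nxt = apply_a(state)
            if nxt is not None:
                tie += 1
                heappush(frontier, (cost + c + inexplicability(nxt, model),
                                    tie, nxt, model, plan + [name]))
        # Explain: pay only the communication cost, but update the human model.
        for name, apply_e, c in explanatory_actions:
            nxt_model = apply_e(model)
            if nxt_model is not None:
                tie += 1
                heappush(frontier, (cost + c, tie, state, nxt_model,
                                    plan + [name]))
    return None
```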
The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction
Williams, Tom (Colorado School of Mines) | Szafir, Daniel (University of Colorado Boulder) | Chakraborti, Tathagata (Arizona State University) | Amor, Heni Ben (Arizona State University)
The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) was held in 2018 in conjunction with the 13th International Conference on Human-Robot Interaction, and brought together researchers from the fields of Human-Robot Interaction (HRI), Robotics, Artificial Intelligence, and Virtual, Augmented, and Mixed Reality in order to identify challenges in mixed reality interactions between humans and robots. This inaugural workshop featured a keynote talk from Blair MacIntyre (Mozilla, Georgia Tech), a panel discussion, and twenty-nine papers presented as lightning talks and/or posters. In this report, we briefly survey the papers presented at the workshop and outline some potential directions for the community.
Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior
Chakraborti, Tathagata, Kulkarni, Anagha, Sreedharan, Sarath, Smith, David E., Kambhampati, Subbarao
There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop. However, the work in this area has typically lacked coherence on the topic, with proposed solutions for "explicable", "legible", "predictable" and "transparent" planning with overlapping, and sometimes conflicting, semantics all aimed at some notion of understanding what intentions the observer will ascribe to an agent by observing its behavior. This is also true for the recent works on "security" and "privacy" of plans which are also trying to answer the same question, but from the opposite point of view -- i.e. when the agent is trying to hide instead of revealing its intentions. This paper attempts to provide a workable taxonomy of relevant concepts in this exciting and emerging field of inquiry.
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
Sengupta, Sailik (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Recent works on gradient-based attacks and universal perturbations can adversarially modify images to bring down the accuracy of state-of-the-art classification techniques based on deep neural networks to as low as 10% on popular datasets like MNIST and ImageNet. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we derive inspiration from recent advances in the fields of cybersecurity and multi-agent systems and propose to use the concept of Moving Target Defense (MTD) for increasing the robustness of a set of deep networks against such adversarial attacks. To this end, we formalize and exploit the notion of differential immunity of an ensemble of networks to specific attacks. To classify an input image, a trained network is picked from this set of networks by formulating the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) Users as a repeated Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for the MNIST and ImageNet datasets while maintaining high classification accuracy on legitimate test images. Lastly, we demonstrate that our framework can be used in conjunction with any existing defense mechanism to provide more resilience to adversarial attacks than those defense mechanisms by themselves.
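A minimal sketch of the deployment step the abstract describes, assuming the defender's mixed strategy over the ensemble has already been computed offline by solving the Stackelberg game: each input is classified by a network sampled from that strategy. The `differential_immunity` helper is a toy stand-in, not the paper's formal definition.

```python
import random

def classify_with_mtd(image, networks, mixed_strategy):
    """Sample one classifier from the ensemble according to the defender's
    mixed strategy (assumed precomputed) and return its prediction.
    networks: list of callables; mixed_strategy: matching probabilities."""
    net = random.choices(networks, weights=mixed_strategy, k=1)[0]
    return net(image)

def differential_immunity(accuracy):
    """Toy measure of differential immunity for an accuracy table
    accuracy[attack][network] (values in [0, 1]): the ensemble is more useful
    for MTD when, for every attack, at least one network stays accurate."""
    return min(max(row.values()) for row in accuracy.values())
```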
User Interfaces and Scheduling and Planning: Workshop Summary and Proposed Challenges
Freedman, Richard G. (University of Massachusetts Amherst) | Chakraborti, Tathagata (Arizona State University) | Talamadupula, Kartik (IBM Research) | Magazzeni, Daniele (King's College London) | Frank, Jeremy D. (NASA Ames Research Center)
The User Interfaces and Scheduling and Planning (UISP) Workshop had its inaugural meeting at the 2017 International Conference on Automated Planning and Scheduling (ICAPS). The UISP community focuses on bridging the gap between automated planning and scheduling technologies and user interface (UI) technologies. Planning and scheduling systems need UIs, and UIs can be designed and built using planning and scheduling systems. The workshop participants included representatives from government organizations, industry, and academia, bringing a variety of insights and novel challenges. We summarize the discussions from the workshop and outline challenges related to this area of research, introducing the now formally established field to the broader user experience and artificial intelligence communities.
Explicability as Minimizing Distance from Expected Behavior
Kulkarni, Anagha, Zha, Yantian, Chakraborti, Tathagata, Vadlamudi, Satya Gautam, Zhang, Yu, Kambhampati, Subbarao
In order to have effective human-AI collaboration, it is not enough to address the question of autonomy alone; an equally important question is how the AI agent's behavior is perceived by its human counterparts. When an AI agent's task plans are generated without such considerations, they may often appear inexplicable from the human's point of view. This problem arises due to the human's partial or inaccurate understanding of the agent's planning process and/or model. It may have serious implications for human-AI collaboration, from increased cognitive load and reduced trust in the agent to more serious concerns of safety in interactions with a physical agent. In this paper, we address this issue by modeling the notion of plan explicability as a function of the distance between the plan that the agent makes and the plan that the human expects it to make. To this end, we learn a distance function based on different plan distance measures that can accurately model this notion of plan explicability, and develop an anytime search algorithm that can use this distance as a heuristic to come up with progressively more explicable plans. We evaluate the effectiveness of our approach in a simulated autonomous car domain and a physical service robot domain, and provide empirical evaluations that demonstrate the usefulness of our approach in making the planning process of an autonomous agent conform to human expectations.
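To make the distance-based notion concrete, here is a small sketch, under stated assumptions, of one classical plan distance measure and of a learned explicability score formed as a weighted combination of such measures. The specific measure and the weighting scheme are illustrative only, not the combination learned in the paper.

```python
def action_set_distance(plan_a, plan_b):
    """Jaccard-style action distance between two plans (sequences of action
    names): the fraction of actions that appear in one plan but not the other.
    A toy stand-in for one of several possible plan distance measures."""
    a, b = set(plan_a), set(plan_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def explicability_distance(candidate_plan, expected_plan, weights, measures):
    """Sketch of a learned distance: a weighted combination of plan distance
    measures between the agent's candidate plan and the plan the human
    expects.  `weights` and `measures` are assumed to come from a model fit
    to human labels, as the abstract describes."""
    return sum(w * m(candidate_plan, expected_plan)
               for w, m in zip(weights, measures))
```

A planner can then use this distance as a heuristic term, returning progressively more explicable plans as search time allows.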
Mr. Jones - Towards a Proactive Smart Room Orchestrator
Chakraborti, Tathagata (Arizona State University) | Talamadupula, Kartik (IBM T. J. Watson Research Center) | Dholakia, Mishal (IBM T. J. Watson Research Center) | Srivastava, Biplav (IBM T. J. Watson Research Center) | Kephart, Jeffrey O. (IBM T. J. Watson Research Center) | Bellamy, Rachel K. E. (IBM T. J. Watson Research Center)
In this brief abstract we report work in progress on developing Mr. Jones - a proactive orchestrator and decision support agent for a collaborative decision-making setting embodied by a smart room. The duties of such an agent may range across interactive problem solving with other agents in the environment, developing automated summaries of meetings, visualization of the internal decision-making process, proactive data and resource management, and so on. Specifically, we highlight the importance of integrating higher-level symbolic reasoning and intent recognition in the design of such an agent, and outline pathways towards the realization of these capabilities. We will demonstrate some of these functionalities here in the context of automated orchestration of a meeting in the CEL - the Cognitive Environments Laboratory at IBM's T. J. Watson Research Center.
RADAR - A Proactive Decision Support System for Human-in-the-Loop Planning
Sengupta, Sailik (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Sreedharan, Sarath (Arizona State University) | Vadlamudi, Satya Gautam (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Proactive Decision Support (PDS) aims at improving the decision making experience of human decision makers by enhancing both the quality of the decisions and the ease of making them. In this paper, we ask what role automated decision-making technologies can play in the deliberative process of the human decision maker. Specifically, we focus on expert humans in the loop who now share a detailed, if not complete, model of the domain with the assistant, but may still be unable to compute plans due to cognitive overload. To this end, we propose a PDS framework, RADAR, based on research in the automated planning community that aids the human decision maker in constructing plans. We situate our discussion on principles of interface design laid out in the literature on the degrees of automation and their effect on the collaborative decision-making process. At the heart of our design is the principle of naturalistic decision making, which has been shown to be a necessary requirement of such systems; the system thus focuses on providing suggestions rather than enforcing decisions and executing actions. We demonstrate the different properties of such a system through examples in a fire-fighting domain, where human commanders are involved in building response strategies to mitigate a fire outbreak. The paper serves both as a position paper, motivating the requirements of an effective proactive decision support system, and as a report on an emerging application of these ideas to the role of an automated planner in human decision making, on a platform that can prove to be a valuable test bed for research on the same.
UbuntuWorld 1.0 LTS - A Platform for Automated Problem Solving & Troubleshooting in the Ubuntu OS
Chakraborti, Tathagata, Talamadupula, Kartik, Fadnis, Kshitij P., Campbell, Murray, Kambhampati, Subbarao
In this paper, we present UbuntuWorld 1.0 LTS - a platform for developing automated technical support agents in the Ubuntu operating system. Specifically, we propose to use the Bash terminal as a simulator of the Ubuntu environment for a learning-based agent and demonstrate the usefulness of adopting reinforcement learning (RL) techniques for basic problem solving and troubleshooting in this environment. We provide a plug-and-play interface to the simulator as a Python package where different types of agents can be plugged in and evaluated, and provide pathways for integrating data from online support forums like AskUbuntu into an automated agent's learning process. Finally, we show that the use of this data significantly improves the agent's learning efficiency. We believe that this platform can be adopted as a real-world test bed for research on automated technical support.
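As a rough illustration of the "terminal as simulator" idea, here is a hypothetical gym-style wrapper in which actions are shell commands and the reward comes from a user-supplied goal check. The class name, interface, and reward scheme are assumptions for the sketch; they are not the actual UbuntuWorld package API.

```python
import subprocess

class BashEnv:
    """Hypothetical environment wrapper illustrating the use of the Bash
    terminal as a simulator for a learning agent.  Intended for a sandboxed
    machine, since actions are executed as real shell commands."""

    def __init__(self, goal_check):
        self.goal_check = goal_check   # callable: terminal output -> bool

    def step(self, command):
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=30)
        observation = result.stdout + result.stderr
        reward = 1.0 if self.goal_check(observation) else -0.1
        done = reward > 0
        return observation, reward, done

# Example: reward the agent once python3 reports as installed.
env = BashEnv(lambda out: "Python 3" in out)
obs, reward, done = env.step("python3 --version")
```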
Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy
Chakraborti, Tathagata, Sreedharan, Sarath, Zhang, Yu, Kambhampati, Subbarao
When AI systems interact with humans in the loop, they are often called on to provide explanations for their plans and behavior. Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decision in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios where the humans have domain and task models that differ significantly from that used by the AI system. We posit that the explanations are best studied in light of these differing models. In particular, we show how explanation can be seen as a "model reconciliation problem" (MRP), where the AI system in effect suggests changes to the human's model, so as to make its plan be optimal with respect to that changed human model. We will study the properties of such explanations, present algorithms for automatically computing them, and evaluate the performance of the algorithms.
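A minimal sketch of the model reconciliation condition described above, assuming a toy encoding in which the known differences between the robot's and the human's models are a finite set of candidate updates: an explanation is a smallest set of updates after which the robot's plan is optimal in the updated human model. The helpers `apply_updates`, `robot_plan_cost`, and `optimal_cost` are hypothetical, and the brute-force enumeration stands in for the paper's search algorithms.

```python
from itertools import combinations

def minimal_explanation(human_model, model_diffs, apply_updates,
                        robot_plan_cost, optimal_cost):
    """Search for the smallest set of model updates E (drawn from the known
    differences between the robot's and the human's models) such that the
    robot's plan is optimal in the updated human model.
    apply_updates(model, E) -> updated model
    robot_plan_cost(model)  -> cost of the robot's plan under that model
    optimal_cost(model)     -> cost of an optimal plan under that model"""
    for k in range(len(model_diffs) + 1):
        for updates in combinations(model_diffs, k):
            updated = apply_updates(human_model, updates)
            # The robot's plan must be optimal in the updated human model.
            if robot_plan_cost(updated) == optimal_cost(updated):
                return list(updates)
    return None
```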