Chakraborti, Tathagata
UbuntuWorld 1.0 LTS — A Platform for Automated Problem Solving & Troubleshooting in the Ubuntu OS
Chakraborti, Tathagata (Arizona State University) | Talamadupula, Kartik (IBM T.J. Watson Research Center) | Fadnis, Kshitij P. (IBM T.J. Watson Research Center) | Campbell, Murray (IBM T.J. Watson Research Center) | Kambhampati, Subbarao (Arizona State University)
In this paper, we present UbuntuWorld 1.0 LTS - a platform for developing automated technical support agents in the Ubuntu operating system. Specifically, we propose to use the Bash terminal as a simulator of the Ubuntu environment for a learning-based agent and demonstrate the usefulness of adopting reinforcement learning (RL) techniques for basic problem solving and troubleshooting in this environment. We provide a plug-and-play interface to the simulator as a Python package where different types of agents can be plugged in and evaluated, and we provide pathways for integrating data from online support forums like Ask Ubuntu into an automated agent's learning process. Finally, we show that the use of this data significantly improves the agent's learning efficiency. We believe that this platform can be adopted as a real-world test bed for research on automated technical support.
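As a rough illustration of the plug-and-play agent/environment pattern the abstract describes, the following minimal Python sketch pairs a toy terminal-like environment with an interchangeable agent. The class and method names (TerminalEnv, RandomAgent, act/observe) are illustrative assumptions and not the actual API of the UbuntuWorld package.

# Minimal, hypothetical sketch of a plug-and-play agent/environment loop;
# names are illustrative assumptions, not the UbuntuWorld package API.
import random


class TerminalEnv:
    """Toy stand-in for a Bash-terminal simulator: states are abstract
    'symptoms', actions are candidate shell commands."""

    def __init__(self):
        self.actions = ["apt-get install vlc", "apt-get update", "reboot"]
        self.state = "package-missing"

    def reset(self):
        self.state = "package-missing"
        return self.state

    def step(self, action):
        # Reward the action that actually resolves the (toy) problem.
        done = action == "apt-get install vlc"
        reward = 1.0 if done else -0.1
        return self.state, reward, done


class RandomAgent:
    """Any agent exposing act()/observe() could be plugged in instead."""

    def act(self, state, actions):
        return random.choice(actions)

    def observe(self, state, action, reward, done):
        pass  # a learning agent would update its policy here


env, agent = TerminalEnv(), RandomAgent()
state, done = env.reset(), False
while not done:
    action = agent.act(state, env.actions)
    state, reward, done = env.step(action)
    agent.observe(state, action, reward, done)
    print(action, reward)

In the same spirit, an RL agent would replace RandomAgent, updating its policy inside observe() from the rewards returned by the simulator.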
Plan Explicability and Predictability for Robot Task Planning
Zhang, Yu | Sreedharan, Sarath | Kulkarni, Anagha | Chakraborti, Tathagata | Zhuo, Hankz Hankui | Kambhampati, Subbarao
Intelligent robots and machines are becoming pervasive in human-populated environments. A desirable capability of these agents is to respond to goal-oriented commands by autonomously constructing task plans. However, such autonomy can add significant cognitive load and potentially introduce safety risks to humans when agents behave unexpectedly. Hence, for such agents to be helpful, one important requirement is for them to synthesize plans that can be easily understood by humans. While previous work has studied socially acceptable robots that interact with humans in "natural ways" and has investigated legible motion planning, a general solution for high-level task planning is still lacking. To address this issue, we introduce the notions of plan explicability and predictability. To compute these measures, we first postulate that humans understand agent plans by associating abstract tasks with agent actions, which can be considered a labeling process. We learn the labeling scheme that humans apply to agent plans from training examples using conditional random fields (CRFs). Then, we use the learned model to label a new plan and thereby compute its explicability and predictability. These measures can be used by agents to proactively choose, or directly synthesize, plans that are more explicable and predictable to humans. We provide evaluations on a synthetic domain and with human subjects using physical robots to show the effectiveness of our approach.
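To make the labeling idea concrete, here is a minimal sketch using the third-party sklearn-crfsuite library (one possible CRF implementation). The feature dictionaries, label names, and the scoring rule at the end are illustrative simplifications, not the paper's exact formulation.

# Sketch: learn a human labeling scheme over plan actions with a CRF, then
# label a new plan and derive a simple explicability-style score from it.
import sklearn_crfsuite

# Each training plan is a sequence of actions (feature dicts) paired with the
# abstract task labels a human annotator associated with those actions.
X_train = [
    [{"action": "pickup", "obj": "block"}, {"action": "move", "obj": "table"}],
    [{"action": "move", "obj": "table"}, {"action": "putdown", "obj": "block"}],
]
y_train = [["GET-OBJECT", "DELIVER"], ["DELIVER", "DELIVER"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)

# Label a new plan; here, explicability is taken (loosely) as the fraction of
# actions that receive some task label rather than an "inexplicable" one.
new_plan = [[{"action": "pickup", "obj": "block"},
             {"action": "putdown", "obj": "block"}]]
labels = crf.predict(new_plan)[0]
explicability = sum(lbl != "NONE" for lbl in labels) / len(labels)
print(labels, explicability)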
A Formal Framework for Studying Interaction in Human-Robot Societies
Chakraborti, Tathagata (Arizona State University) | Talamadupula, Kartik (IBM Thomas J. Watson Research Center) | Zhang, Yu (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
As robots evolve into an integral part of the human ecosystem, humans and robots will be involved in a multitude of collaborative tasks that require complex coordination and cooperation. Indeed, there has been extensive work in the robotics, planning, and human-robot interaction communities to understand and facilitate such seamless teaming. However, it has been argued that the increased participation of robots as independent autonomous agents in hitherto human-inhabited environments has introduced many new challenges to the traditional view of human-robot teaming. When robots are deployed with independent and often self-sufficient tasks in a shared workspace, teams are often not formed explicitly, and multiple teams cohabiting an environment interact more like colleagues than teammates. In this paper, we formalize these differences and analyze metrics to characterize autonomous behavior in such human-robot cohabitation settings.
A Game Theoretic Approach to Ad-Hoc Coalitions in Human-Robot Societies
Chakraborti, Tathagata (Arizona State University) | Meduri, Venkata Vamsikrishna (Arizona State University) | Dondeti, Vivek (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
As robots evolve into fully autonomous agents, settings involving human-robot teams will evolve into human-robot societies, where multiple independent agents and teams, both humans and robots, coexist and work in harmony. Given such a scenario, the question we ask is: how can two or more such agents dynamically form coalitions or teams for mutual benefit with minimal prior coordination? In this work, we provide a game-theoretic solution to this problem. We first look at a situation with full information and provide approximations to compute the extensive form game more efficiently, and then extend the formulation to account for scenarios in which the human is not fully confident of the potential partner's intentions. Finally, we look at possible extensions of the game that can capture different aspects of decision making with respect to ad-hoc coalition formation in human-robot societies.
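As a toy illustration of reasoning over an extensive form game of this kind, the following Python sketch applies backward induction to decide whether an ad-hoc coalition is mutually beneficial. The tree structure, action names, and payoffs are invented for illustration and do not come from the paper.

# Toy backward induction over a tiny extensive-form game: does the coalition form?
def leaf(h, r):
    # A terminal node carrying (human_payoff, robot_payoff).
    return (None, {}, (h, r))


def backward_induction(node):
    """Return the (human_payoff, robot_payoff) reached under optimal play."""
    player, children, payoff = node
    if not children:                       # leaf node
        return payoff
    # The player to move picks the child maximizing their own payoff.
    idx = 0 if player == "human" else 1
    return max((backward_induction(c) for c in children.values()),
               key=lambda p: p[idx])


# Node format: (player-to-move, {action: subtree}, leaf payoff or None)
game = ("human", {
    "propose-coalition": ("robot", {
        "accept":  leaf(5, 4),    # both benefit from teaming up
        "decline": leaf(1, 2),    # robot continues alone
    }, None),
    "work-alone": leaf(2, 2),
}, None)

print(backward_induction(game))   # -> (5, 4): the coalition forms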
AI-MIX: Using Automated Planning to Steer Human Workers Towards Better Crowdsourced Plans
Manikonda, Lydia (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | De, Sushovan (Arizona State University) | Talamadupula, Kartik (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Human computation applications that involve planning and scheduling are gaining popularity, and the existing literature on such systems shows that even simple automated oversight of human contributors improves the effectiveness of the crowd. In this paper, we present our ongoing work on the AI-MIX system, which is a first step towards using an automated planning and scheduling system in a crowdsourced planning application. In order to address the mismatch between the capabilities of the crowd and the automated planner, we identify two major challenges: interpretation and steering. We also present preliminary empirical results in the tour planning domain, and show how using an automated planner can help improve the quality of plans.
AI-MIX: Using Automated Planning to Steer Human Workers Towards Better Crowdsourced Plans
Manikonda, Lydia (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | De, Sushovan (Arizona State University) | Talamadupula, Kartik (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
One subclass of human computation applications comprises those directed at tasks that involve planning (e.g., tour planning) and scheduling (e.g., conference scheduling). Interestingly, work on these systems shows that even primitive forms of automated oversight of the human contributors significantly improve the effectiveness of the crowd. In this paper, we argue that the automated oversight used in these systems can be viewed as a primitive automated planner, and that there are several opportunities for more sophisticated automated planning in effectively steering the crowd. Straightforward adaptation of current planning technology is, however, hampered by the mismatch between the capabilities of human workers and automated planners. We identify and partially address two important challenges that need to be overcome before such adaptation of planning technology can occur: (1) interpreting the inputs of the human workers (and the requester), and (2) steering or critiquing plans produced by the human workers, armed only with incomplete domain and preference models. To these ends, we describe the implementation of AI-MIX, a tour plan generation system that uses automated checks and alerts to improve the quality of plans created by human workers, and present a preliminary evaluation of the effectiveness of steering provided by automated planning.
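To give a flavor of the kind of automated check/alert such an oversight planner could run over a crowd-authored tour plan, here is a minimal Python sketch. The plan format, the critique function, and the specific rules checked (overlapping time slots, missing requester-specified activities) are illustrative assumptions, not the AI-MIX implementation.

# Sketch: flag overlapping activities and missing requested points of interest
# in a crowd-authored tour plan.
from datetime import datetime


def parse(t):
    return datetime.strptime(t, "%H:%M")


def critique(plan, must_include):
    """Return human-readable alerts for a list of timed activities."""
    alerts = []
    slots = sorted(plan, key=lambda a: parse(a["start"]))
    for prev, nxt in zip(slots, slots[1:]):
        if parse(nxt["start"]) < parse(prev["end"]):
            alerts.append(f"'{prev['name']}' overlaps with '{nxt['name']}'")
    planned = {a["name"] for a in plan}
    for poi in must_include - planned:
        alerts.append(f"requested activity '{poi}' is missing from the plan")
    return alerts


crowd_plan = [
    {"name": "Museum of Art", "start": "10:00", "end": "12:30"},
    {"name": "Botanical Garden", "start": "12:00", "end": "14:00"},
]
print(critique(crowd_plan, must_include={"Old Town walking tour"}))

Alerts of this kind would be surfaced to the human workers as steering feedback rather than applied automatically, consistent with the incomplete domain and preference models mentioned above.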