Arizona State University
Balancing Explicability and Explanation in Human-Aware Planning
Sreedharan, Sarath (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable to a human observer, as well as the ability to provide explanations when such plans cannot be generated. This has led to the notion of "multi-model planning," which aims to incorporate the effects of human expectation into the deliberative process of a planner, either in the form of explicable task planning or of explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a trade-off during the plan generation process itself by means of a model-space search method, MEGA. This, in effect, provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware" by bringing together existing principles of planning under the umbrella of a single plan generation process. We situate our discussion specifically with the recent work on explicable planning and explanation generation in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot involved in a typical search-and-reconnaissance task with an external supervisor.
Explanations as Model Reconciliation — A Multi-Agent Perspective
Sreedharan, Sarath (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
In this paper, we demonstrate how a planner (or a robot as an embodiment of it) can explain its decisions to multiple agents in the loop, considering not only the model that it used to come up with its decisions but also the (often misaligned) models of the same task that the other agents might have. To do this, we build on our previous work on multi-model explanation generation and extend it to account for settings where there is uncertainty in the robot's model of the explainee and/or there are multiple explainees with different models to explain to. We illustrate these concepts in a demonstration on a robot involved in a typical search-and-reconnaissance scenario with another human teammate and an external human supervisor.
Mr. Jones — Towards a Proactive Smart Room Orchestrator
Chakraborti, Tathagata (Arizona State University) | Talamadupula, Kartik (IBM T. J. Watson Research Center) | Dholakia, Mishal (IBM T. J. Watson Research Center) | Srivastava, Biplav (IBM T. J. Watson Research Center) | Kephart, Jeffrey O. (IBM T. J. Watson Research Center) | Bellamy, Rachel K. E. (IBM T. J. Watson Research Center)
In this brief abstract we report work in progress on developing Mr. Jones, a proactive orchestrator and decision support agent for a collaborative decision-making setting embodied by a smart room. The duties of such an agent may range across interactive problem solving with other agents in the environment, developing automated summaries of meetings, visualization of the internal decision-making process, proactive data and resource management, and so on. Specifically, we highlight the importance of integrating higher-level symbolic reasoning and intent recognition in the design of such an agent, and outline pathways towards the realization of these capabilities. We demonstrate some of these functionalities here in the context of automated orchestration of a meeting in the CEL, the Cognitive Environments Laboratory at IBM's T. J. Watson Research Center.
RADAR — A Proactive Decision Support System for Human-in-the-Loop Planning
Sengupta, Sailik (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Sreedharan, Sarath (Arizona State University) | Vadlamudi, Satya Gautam (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Proactive Decision Support (PDS) aims at improving the decision-making experience of human decision makers by enhancing both the quality of the decisions and the ease of making them. In this paper, we ask what role automated decision-making technologies can play in the deliberative process of the human decision maker. Specifically, we focus on expert humans in the loop who share a detailed, if not complete, model of the domain with the assistant, but may still be unable to compute plans due to cognitive overload. To this end, we propose a PDS framework, RADAR, based on research in the automated planning community that aids the human decision maker in constructing plans. We situate our discussion on principles of interface design laid out in the literature on degrees of automation and their effect on the collaborative decision-making process. At the heart of our design is the principle of naturalistic decision making, which has been shown to be a necessary requirement of such systems; we thus focus on providing suggestions rather than enforcing decisions and executing actions. We demonstrate the different properties of such a system through examples in a fire-fighting domain, where human commanders are involved in building response strategies to mitigate a fire outbreak. The paper is written to serve both as a position paper motivating the requirements of an effective proactive decision support system, and as an emerging application of these ideas to the role of an automated planner in human decision making, in a platform that can prove to be a valuable test bed for research on the same.
Computational Analysis of Lexical and Cohesion Differences in Deceptive Language: The Role of Accordance
Heidari, Ali (Georgia State University) | D’Arienzo, Meredith (Georgia State University) | Crossley, Scott (Georgia State University) | Duran, Nicholas (Arizona State University)
In this study, two advanced computational text analysis tools were used to catalogue lexical and cohesive features of deceptive language and language accordance (i.e., agreement or disagreement on the topic of conversation) in a corpus of dyadic conversations. The study specifically focused on how the variable of accordance conditions the process of deception in terms of lexical and cohesive features. The results indicated no interaction between deception and accordance in deceptive conversations in terms of cohesion or lexical sophistication indices. The results also showed no main effect of cohesion and lexical sophistication indices for deceptive versus non-deceptive conversations. However, main effects were observed for indices of cohesion and lexical sophistication in distinguishing conversations characterized by agreement from those characterized by disagreement. The linguistic differences related to the cohesive and lexical sophistication aspects of agreement versus disagreement conversations are discussed.
Recurrence Quantification Analysis: A Technique for the Dynamical Analysis of Student Writing
Allen, Laura Kristen (Arizona State University) | Likens, Aaron D (Arizona State University) | McNamara, Danielle S (Arizona State University)
The current study examined the degree to which the quality and characteristics of students’ essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed using recurrence quantification analysis (RQA). Results of correlation and regression analyses revealed that the RQA indices were significantly related to the quality of students’ essays, at both holistic and sub-scale levels (e.g., organization, cohesion). Additionally, these indices were able to account for between 11% and 43% of the variance in students’ holistic and sub-scale essay scores. Overall, our results suggest that dynamic techniques can be used to improve natural language processing assessments of student essays.
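To make the technique concrete, here is a minimal sketch of categorical RQA over a word sequence, where a "recurrence" is an exact word repetition. The recurrence rate and determinism indices follow standard RQA definitions, but the toy sentence and whitespace tokenizer are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of categorical recurrence quantification analysis (RQA).
# A recurrence here is an exact word repetition; determinism measures how
# many recurrent points fall on diagonal lines (repeated word sequences).

def recurrence_matrix(words):
    n = len(words)
    return [[1 if words[i] == words[j] else 0 for j in range(n)] for i in range(n)]

def recurrence_rate(rm):
    # Fraction of recurrent points off the main diagonal.
    n = len(rm)
    off = sum(rm[i][j] for i in range(n) for j in range(n) if i != j)
    return off / (n * n - n) if n > 1 else 0.0

def determinism(rm, lmin=2):
    # Fraction of off-diagonal recurrent points on diagonal lines of
    # length >= lmin, i.e., repeated word *sequences*.
    n = len(rm)
    in_lines = total = 0
    for d in range(-(n - 1), n):
        if d == 0:
            continue
        diag = [rm[i][i + d] for i in range(n) if 0 <= i + d < n]
        total += sum(diag)
        run = 0
        for v in diag + [0]:        # sentinel 0 flushes the final run
            if v:
                run += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
    return in_lines / total if total else 0.0

essay = "the cat sat on the mat and the cat slept".split()
rm = recurrence_matrix(essay)
rr, det = recurrence_rate(rm), determinism(rm)
```

In the toy sentence, the repeated bigram "the cat" produces diagonal lines in the recurrence plot, which is what the determinism index captures; full RQA toolkits add further indices (e.g., maximum line length, entropy) in the same spirit.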
A Logic Based Approach to Answering Questions about Alternatives in DIY Domains
Wang, Yi (Arizona State University) | Lee, Joohyung (Arizona State University) | Kim, Doo Soon (Bosch Research and Technology Center)
Many question answering systems have primarily focused on factoid questions. These systems require the answers to be explicitly stored in a knowledge base (KB); due to this requirement, they fail to answer many questions for which the answers cannot be pre-formulated. This paper presents a question answering system that aims at answering non-factoid questions in the DIY domain using logic-based reasoning. Specifically, the system uses Answer Set Programming to derive an answer by combining various types of knowledge, such as domain and commonsense knowledge. We showcase the system by answering one specific type of question: questions about alternatives. The evaluation results show that our logic-based reasoning, together with the KB (constructed from texts using Information Extraction), significantly improves the user experience.
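The paper's reasoning is done in Answer Set Programming; purely to illustrate the flavor of deriving alternatives from a KB, here is a plain-Python stand-in that applies one hypothetical rule: Y is an alternative to X if Y serves the same function as X and Y is available. All facts and predicate names below are invented, not from the paper's KB.

```python
# Toy stand-in for rule-based derivation of alternatives.
# Rule (hypothetical): alternative(X, Y) if serves(X, F), serves(Y, F),
#                      X != Y, and available(Y).

serves = {                     # (item, function) facts
    ("duct_tape", "fastening"),
    ("glue", "fastening"),
    ("nail", "fastening"),
    ("rope", "tying"),
}
available = {"glue", "nail", "rope"}

def alternatives(item):
    functions = {f for (x, f) in serves if x == item}
    return sorted(y for (y, f) in serves
                  if f in functions and y != item and y in available)
```

In an ASP encoding the same rule would be a single line, and the solver would also handle defaults and exceptions from commonsense knowledge, which this sketch omits.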
UbuntuWorld 1.0 LTS — A Platform for Automated Problem Solving & Troubleshooting in the Ubuntu OS
Chakraborti, Tathagata (Arizona State University) | Talamadupula, Kartik (IBM T.J. Watson Research Center) | Fadnis, Kshitij P. (IBM T.J. Watson Research Center) | Campbell, Murray (IBM T.J. Watson Research Center) | Kambhampati, Subbarao (Arizona State University)
In this paper, we present UbuntuWorld 1.0 LTS, a platform for developing automated technical support agents in the Ubuntu operating system. Specifically, we propose to use the Bash terminal as a simulator of the Ubuntu environment for a learning-based agent and demonstrate the usefulness of adopting reinforcement learning (RL) techniques for basic problem solving and troubleshooting in this environment. We provide a plug-and-play interface to the simulator as a Python package into which different types of agents can be plugged and evaluated, and provide pathways for integrating data from online support forums like Ask Ubuntu into an automated agent's learning process. Finally, we show that the use of this data significantly improves the agent's learning efficiency. We believe that this platform can be adopted as a real-world test bed for research on automated technical support.
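The abstract does not spell out the RL formulation, so as a toy illustration only, here is a minimal tabular Q-learning loop over a made-up two-step troubleshooting task. The states, actions, rewards, and dynamics are hypothetical, not UbuntuWorld's environment interface.

```python
import random

# Toy tabular Q-learning sketch for a troubleshooting-style task.
# Hypothetical dynamics: the agent must diagnose a problem before
# repairing it; useless actions incur a small penalty.

random.seed(0)
STATES = ["broken", "diagnosed", "fixed"]
ACTIONS = ["diagnose", "repair"]

def step(state, action):
    if state == "broken" and action == "diagnose":
        return "diagnosed", 0.0
    if state == "diagnosed" and action == "repair":
        return "fixed", 1.0
    return state, -0.1            # no effect, small penalty

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for _ in range(200):                  # episodes
    s = "broken"
    while s != "fixed":
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        best_next = 0.0 if s2 == "fixed" else max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
```

After training, the learned Q-values prefer diagnosing first and repairing second, which is the optimal order in this toy task; the platform's contribution is precisely to supply a realistic environment (the Bash terminal) in place of such toy dynamics.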
Explainable Image Understanding Using Vision and Reasoning
Aditya, Somak (Arizona State University)
Image Understanding is fundamental to intelligent agents. Researchers have explored Caption Generation and Visual Question Answering as independent aspects of Image Understanding (Johnson et al. 2015; Xiong, Merity, and Socher 2016). Common to most of the successful approaches is the learning of an end-to-end signal mapping (image to caption; image and question to answer), and the accuracy is impressive. It is also important, however, to explain a decision to the end user (justify the results, and rectify based on feedback). Very recently, there has been some focus (Hendricks et al. 2016; Liu et al.) on explaining some aspects of these learning systems. In my research, I look towards building explainable Image Understanding systems that can be used to generate captions and answer questions. Humans learn both from examples (learning) and by reading (knowledge). Inspired by this intuition, researchers have constructed knowledge bases that encode (probabilistic) commonsense and background knowledge. In this work, we look towards efficiently using this probabilistic knowledge on top of machine learning capabilities to rectify noise in visual detections and generate captions or answers to posed questions.
Finding Cut from the Same Cloth: Cross Network Link Recommendation via Joint Matrix Factorization
Nelakurthi, Arun Reddy (Arizona State University) | He, Jingrui (Arizona State University)
With the emergence of online forums associated with major diseases, such as diabetes mellitus, many patients are increasingly dependent on such disease-specific social networks to gain access to additional resources. It is common for these patients to stick to one disease-specific social network, although their desired resources, such as patients with similar questions and concerns, might be spread over multiple social networks. Motivated by this application, in this paper we focus on cross-network link recommendation, which aims to identify similar users across multiple heterogeneous social networks. The problem setting is different from existing work on cross-network link prediction, which either tries to link accounts of the same user from different social networks or aims to match users with complementary expertise or interests. To approach the problem of cross-network link recommendation, we propose to jointly decompose the user-keyword matrices from multiple social networks, while requiring them to share the same topic and user group-topic association matrices. This constraint comes from the fact that social networks dedicated to the same disease tend to share the same topics as well as the interests of user groups in certain topics. Based on this intuition, we construct a generic optimization framework, provide four instantiations, and present an iterative optimization algorithm with performance analysis. In the experiments, we demonstrate the superiority of the proposed algorithm over state-of-the-art techniques on various real-world data sets.
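As a rough sketch of the shared-factor idea, the following jointly factors two user-keyword matrices X1 and X2 as Ui G V^T, with the group-topic matrix G and keyword-topic matrix V shared across networks; similar users across networks can then be compared in the shared topic space. The shapes, random data, and plain unconstrained gradient-descent solver are illustrative assumptions, not the paper's four instantiations or its optimization algorithm.

```python
import numpy as np

# Jointly factor X1 (n1 x m) and X2 (n2 x m) as Ui @ G @ V.T, sharing the
# group-topic matrix G and keyword-topic matrix V across the two networks.
rng = np.random.default_rng(0)
n1, n2, m, g, t = 30, 40, 50, 4, 6     # users per network, keywords, groups, topics
X1, X2 = rng.random((n1, m)), rng.random((n2, m))

U1 = 0.1 * rng.random((n1, g))         # per-network user-group matrices
U2 = 0.1 * rng.random((n2, g))
G = 0.1 * rng.random((g, t))           # shared group-topic association matrix
V = 0.1 * rng.random((m, t))           # shared keyword-topic matrix

def loss():
    return (np.linalg.norm(X1 - U1 @ G @ V.T) ** 2
            + np.linalg.norm(X2 - U2 @ G @ V.T) ** 2)

loss0, lr = loss(), 1e-3
for _ in range(300):
    R1, R2 = U1 @ G @ V.T - X1, U2 @ G @ V.T - X2   # residuals
    gU1 = 2 * R1 @ V @ G.T
    gU2 = 2 * R2 @ V @ G.T
    gG = 2 * (U1.T @ R1 @ V + U2.T @ R2 @ V)        # gradient pools both networks
    gV = 2 * (R1.T @ U1 @ G + R2.T @ U2 @ G)
    U1 -= lr * gU1; U2 -= lr * gU2; G -= lr * gG; V -= lr * gV
loss1 = loss()

# Cross-network recommendation: cosine similarity of user topic profiles Ui @ G.
P1, P2 = U1 @ G, U2 @ G
P1 = P1 / np.linalg.norm(P1, axis=1, keepdims=True)
P2 = P2 / np.linalg.norm(P2, axis=1, keepdims=True)
sim = P1 @ P2.T                        # sim[i, j]: user i in net 1 vs user j in net 2
```

Sharing G and V is what encodes the constraint that both disease-specific networks discuss the same topics with similar group-level interests; only the user-group matrices U1, U2 are network-specific.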