Arizona State University
Embedding Directed Graphs in Potential Fields Using FastMap-D
Gopalakrishnan, Sriram (Arizona State University) | Cohen, Liron (University of Southern California) | Koenig, Sven (University of Southern California) | Kumar, T. K. Satish (University of Southern California)
Embedding undirected graphs in a Euclidean space has many computational benefits. FastMap is an efficient embedding algorithm that facilitates a geometric interpretation of problems posed on undirected graphs. However, Euclidean distances are inherently symmetric and, thus, Euclidean embeddings cannot be used for directed graphs. In this paper, we present FastMap-D, an efficient generalization of FastMap to directed graphs. FastMap-D embeds vertices using a potential field to capture the asymmetry between the to-and-fro pairwise distances in directed graphs. FastMap-D learns a potential function to define the potential field using a machine learning module. In experiments on various kinds of directed graphs, we demonstrate the advantage of FastMap-D over other approaches.
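The asymmetry idea above can be sketched concretely. Assuming the FastMap-D model decomposes a directed distance as a symmetric Euclidean part plus a potential difference, d(u, v) ≈ d_sym(u, v) + φ(v) − φ(u), the asymmetric residue d(u, v) − d(v, u) equals 2(φ(v) − φ(u)), and a least-squares potential can be recovered by averaging. This is an illustrative sketch, not the paper's learning module; `fit_potentials` and the toy graph are assumptions.

```python
# Sketch: recover a potential function phi explaining the asymmetric part
# of directed distances, under the assumed model
#   d(u, v) ~ d_sym(u, v) + phi(v) - phi(u),  with d_sym symmetric.

def fit_potentials(d):
    """d: dict mapping every ordered pair (u, v) of distinct vertices to
    its directed distance. Returns phi: vertex -> potential, the
    least-squares fit normalized to mean zero."""
    vertices = sorted({u for u, _ in d})
    n = len(vertices)
    phi = {}
    for v in vertices:
        # Under the model, d(u,v) - d(v,u) = 2*(phi(v) - phi(u)), so
        # averaging the asymmetry over u isolates phi(v) up to a constant.
        phi[v] = sum(d[(u, v)] - d[(v, u)] for u in vertices if u != v) / (2 * n)
    return phi

# Toy "downhill" path a -> b -> c: cheap one way, costly the other.
dist = {
    ("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 2,   # downhill direction
    ("b", "a"): 3, ("c", "b"): 3, ("c", "a"): 6,   # costlier uphill direction
}
phi = fit_potentials(dist)   # a sits highest (1.0), c lowest (-1.0)
```

The recovered potentials reproduce the asymmetries exactly here because the toy distances are gradient-like; on general directed graphs the fit is only approximate, which is where the learned embedding earns its keep.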
The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction
Williams, Tom (Colorado School of Mines) | Szafir, Daniel (University of Colorado Boulder) | Chakraborti, Tathagata (Arizona State University) | Amor, Heni Ben (Arizona State University)
The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) was held in 2018 in conjunction with the 13th International Conference on Human-Robot Interaction, and brought together researchers from the fields of Human-Robot Interaction (HRI), Robotics, Artificial Intelligence, and Virtual, Augmented, and Mixed Reality in order to identify challenges in mixed reality interactions between humans and robots. This inaugural workshop featured a keynote talk from Blair MacIntyre (Mozilla, Georgia Tech), a panel discussion, and twenty-nine papers presented as lightning talks and/or posters. In this report, we briefly survey the papers presented at the workshop and outline some potential directions for the community.
Reports on the 2018 AAAI Spring Symposium Series
Amato, Christopher (Northeastern University) | Ammar, Haitham Bou (PROWLER.io) | Churchill, Elizabeth (Google) | Karpas, Erez (Technion - Israel Institute of Technology) | Kido, Takashi (Stanford University) | Kuniavsky, Mike (Parc) | Lawless, W. F. (Paine College) | Rossi, Francesca (IBM T. J. Watson Research Center and University of Padova) | Oliehoek, Frans A. (TU Delft) | Russell, Stephen (US Army Research Laboratory) | Takadama, Keiki (University of Electro-Communications) | Srivastava, Siddharth (Arizona State University) | Tuyls, Karl (Google DeepMind) | Allen, Philip Van (Art Center College of Design) | Venable, K. Brent (Tulane University and IHMC) | Vrancx, Peter (PROWLER.io) | Zhang, Shiqi (Cleveland State University)
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2018 Spring Symposium Series, held Monday through Wednesday, March 26-28, 2018, on the campus of Stanford University. The seven symposia held were AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents; Artificial Intelligence for the Internet of Everything; Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI; Data Efficient Reinforcement Learning; The Design of the User Experience for Artificial Intelligence (the UX of AI); Integrated Representation, Reasoning, and Learning in Robotics; Learning, Inference, and Control of Multi-Agent Systems. This report, compiled from organizers of the symposia, summarizes the research of five of the symposia that took place.
Learning Generalized Reactive Policies Using Deep Neural Networks
Groshev, Edward (University of California, Berkeley) | Goldstein, Maxwell (Princeton University) | Tamar, Aviv (University of California, Berkeley) | Srivastava, Siddharth (Arizona State University) | Abbeel, Pieter (University of California, Berkeley)
We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision making policies and powerful heuristic functions with minimal human input.
Tweeting AI: Perceptions of Lay versus Expert Twitterati
Manikonda, Lydia (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
In light of the significant public interest in AI technology and its impacts, in this research we set out to analyze the contours of public discourse and perceptions of AI, as reflected in social media. We focus on Twitter, and analyze over two million AI-related tweets posted by over 40,000 users. In addition to analyzing the macro characteristics of this whole discourse in terms of demographics, sentiment, and topics, we also provide a differential analysis of tweets from experts vs. non-experts, as well as a differential analysis of male vs. female tweeters. We see that (i) by and large, the sentiments expressed in the AI discourse are more positive than is par for Twitter; (ii) the lay public tends to be more positive about AI than expert tweeters; and (iii) women tend to be more positive about AI impacts than men. Analysis of topics discussed also shows interesting differential patterns across experts vs. non-experts and men vs. women. For example, we see that women tend to focus a lot more on the ethical issues surrounding AI. Our analysis provides a far more nuanced picture of the public discourse on AI.
Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation
Sreedharan, Sarath (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem by explaining its decisions in terms of these model differences. However, often the human's mental model (and hence the difference) is not known precisely and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information and how such redundancies can be reduced using conditional explanations to iterate with the human to attain common ground. Finally, we will introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanations in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.
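The conformant-explanation idea can be illustrated with a deliberately simplified sketch. This is not the paper's search-based algorithm: here a model is just a set of facts, the plan-relevant facts are assumed given, and a conformant explanation is whatever must be communicated so the justification holds in every possible human model. All names below are illustrative.

```python
# Simplified sketch (not the paper's algorithm): models as fact sets.

def conformant_explanation(robot_model, possible_human_models, relevant):
    """relevant: the subset of robot-model facts the plan's justification
    actually depends on (assumed given here).
    Returns the facts to communicate so the justification holds in every
    candidate human model."""
    explanation = set()
    for human_model in possible_human_models:
        # Facts this candidate model is missing among the relevant ones.
        explanation |= (relevant & robot_model) - human_model
    return explanation

robot = {"f1", "f2", "f3"}
humans = [{"f1", "f3"}, {"f2", "f3"}]   # two candidate mental models
relevant = {"f1", "f2"}
exp = conformant_explanation(robot, humans, relevant)
```

Note how the toy run exposes the redundancy the abstract mentions: each candidate model is missing only one fact, yet the conformant explanation must include both; a conditional explanation that first queries the human could prune the superfluous half.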
Comparing Machine Learning Classification Approaches for Predicting Expository Text Difficulty
Balyan, Renu (Arizona State University) | McCarthy, Kathryn S. (Arizona State University) | McNamara, Danielle S. (Arizona State University)
While hierarchical machine learning approaches have been used to classify texts into different content areas, this approach has, to our knowledge, not been used in the automated assessment of text difficulty. This study compared the accuracy of four classification machine learning approaches (flat, one-vs-one, one-vs-all, and hierarchical) using natural language processing features in predicting human ratings of text difficulty for two sets of texts. The hierarchical classification was the most accurate for the two text sets considered individually (Set A, 77.78%; Set B, 82.05%), while the non-hierarchical approaches, one-vs-one and one-vs-all, performed similarly to the hierarchical classification for the combined set (71.43%). These findings suggest both promise and limitations for applying hierarchical approaches to text difficulty classification. It may be beneficial to apply a recursive top-down approach to discriminate the subsets of classes that are at the top of the hierarchy and less related, and then further separate the classes into subsets that may be more similar to one another. These results also suggest that a single approach may not always work for all types of datasets and that it is important to evaluate which machine learning approach and algorithm works best for particular datasets. The authors encourage more work in this area to help suggest which types of algorithms work best as a function of the type of dataset.
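The recursive top-down idea can be sketched in miniature. Assuming difficulty labels form a two-level hierarchy (coarse: easy vs. hard; fine: levels 1-4), a hierarchical classifier first separates the less-related coarse groups and then discriminates within each. The threshold rules on a single readability score below are toy stand-ins; the study trained ML classifiers over NLP features.

```python
# Toy two-stage (hierarchical) classifier over an assumed 0-100
# readability score; thresholds and labels are illustrative only.

def classify_difficulty(score):
    # Stage 1: separate the two coarse, less related groups.
    coarse = "easy" if score < 50 else "hard"
    # Stage 2: discriminate within the more similar subset.
    if coarse == "easy":
        return 1 if score < 25 else 2
    return 3 if score < 75 else 4
```

A flat classifier would instead pick among all four levels in one decision; the hierarchical decomposition lets each stage use features (or models) suited to its own, narrower distinction.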
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
Sengupta, Sailik (Arizona State University) | Chakraborti, Tathagata (Arizona State University) | Kambhampati, Subbarao (Arizona State University)
Recent works on gradient-based attacks and universal perturbations can adversarially modify images to bring down the accuracy of state-of-the-art classification techniques based on deep neural networks to as low as 10% on popular datasets like MNIST and ImageNet. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we derive inspiration from recent advances in the fields of cybersecurity and multi-agent systems and propose to use the concept of Moving Target Defense (MTD) for increasing the robustness of a set of deep networks against such adversarial attacks. To this end, we formalize and exploit the notion of differential immunity of an ensemble of networks to specific attacks. To classify an input image, a trained network is picked from this set of networks by formulating the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) Users as a repeated Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for MNIST and ImageNet datasets while maintaining high classification accuracy on legitimate test images. Lastly, we demonstrate that our framework can be used in conjunction with any existing defense mechanism to provide more resilience to adversarial attacks than those defense mechanisms by themselves.
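The value of differential immunity can be illustrated with a simplified special case. This is not the paper's repeated Bayesian Stackelberg formulation: against a purely malicious user, randomizing the network choice reduces to a maximin problem over the defender's mixed strategy, which for two networks can be solved by a line search. The payoff matrix and function name below are illustrative assumptions.

```python
# Sketch: maximin mixed strategy for picking between two networks, where
# U[i][j] = accuracy of network i under attack j (illustrative numbers).

def maximin_mixture(U, steps=1000):
    """Search over p = P(pick network 0) for the mixture whose worst-case
    accuracy over all attacks is highest."""
    best_p, best_val = 0.0, float("-inf")
    n_attacks = len(U[0])
    for k in range(steps + 1):
        p = k / steps
        # Attacker best-responds with the attack minimizing expected accuracy.
        worst = min(p * U[0][j] + (1 - p) * U[1][j] for j in range(n_attacks))
        if worst > best_val:
            best_p, best_val = p, worst
    return best_p, best_val

# A differentially immune pair: each network resists the attack that
# breaks the other, so mixing beats either pure choice (worst case 0.1).
U = [[0.9, 0.1],
     [0.1, 0.9]]
p, value = maximin_mixture(U)
```

With these numbers the optimal mixture is an even coin flip with worst-case accuracy 0.5, five times better than committing to either single network; the full BSG additionally accounts for legitimate users and repeated play.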
User Interfaces and Scheduling and Planning: Workshop Summary and Proposed Challenges
Freedman, Richard G. (University of Massachusetts Amherst) | Chakraborti, Tathagata (Arizona State University) | Talamadupula, Kartik (IBM Research) | Magazzeni, Daniele (King's College London) | Frank, Jeremy D. (NASA Ames Research Center)
The User Interfaces and Scheduling and Planning (UISP) Workshop had its inaugural meeting at the 2017 International Conference on Automated Planning and Scheduling (ICAPS). The UISP community focuses on bridging the gap between automated planning and scheduling technologies and user interface (UI) technologies. Planning and scheduling systems need UIs, and UIs can be designed and built using planning and scheduling systems. The workshop participants included representatives from government organizations, industry, and academia with various insights and novel challenges. We summarize the discussions from the workshop as well as outline challenges related to this area of research, introducing the now formally established field to the broader user experience and artificial intelligence communities.
Safe Goal-Directed Autonomy and the Need for Sound Abstractions
Srivastava, Siddharth (Arizona State University)
The field of sequential decision making (SDM) captures a range of mathematical frameworks geared towards the synthesis of goal-directed behaviors for autonomous systems. Abstract benchmark problems such as the blocks-world domain have facilitated immense progress in solution algorithms for SDM. However, there is some evidence that a direct application of SDM algorithms in real-world situations can produce unsafe behaviors. This is particularly apparent in task and motion planning in robotics. We believe that the reliability of today's SDM algorithms is limited because SDM models, such as the blocks-world domain, are unsound abstractions (those that yield false inferences) of real-world situations. This position paper presents the case for a focused research effort towards the study of sound abstractions of models for SDM and algorithms for efficiently computing safe goal-directed behavior using such abstractions.