Chernova, Sonia


Classification of Household Materials via Spectroscopy

arXiv.org Machine Learning

Recognizing an object's material can inform a robot how firmly it may grasp the object during manipulation, or whether the object may be safely heated. To estimate an object's material during manipulation, many prior works have explored the use of haptic sensing. In this paper, we explore a technique for robots to estimate the materials of objects using spectroscopy. We demonstrate that spectrometers provide several benefits for material recognition, including fast sensing times and accurate measurements with low noise. Furthermore, spectrometers do not require direct contact with an object. To illustrate this, we collected a dataset of spectral measurements from two commercially available spectrometers while a robotic platform interacted with 50 distinct objects, and we show that a residual neural network can accurately analyze these measurements. Due to the low variance in consecutive spectral measurements, our model achieved a material classification accuracy of 97.7% when given only one spectral sample per object. Similar to prior works with haptic sensors, we found that generalizing material recognition to new objects posed a greater challenge, for which we achieved an accuracy of 81.4% via leave-one-object-out cross-validation. From this work, we find that spectroscopy is a promising approach for further research in material classification during robotic manipulation.
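
As a rough sketch of the classification step described above (not the authors' implementation; the 331-channel input, three residual blocks, and five material classes are all placeholder choices for illustration), a small residual network over single spectral samples might look like:

    # Minimal sketch, not the authors' code: a small residual network that maps
    # a single spectral measurement to a material class. Input width, depth,
    # and class count are assumptions, not values from the paper.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                     nn.Linear(dim, dim))

        def forward(self, x):
            return torch.relu(x + self.net(x))  # skip connection

    class SpectralResNet(nn.Module):
        def __init__(self, n_channels=331, n_classes=5, hidden=256):
            super().__init__()
            self.inp = nn.Linear(n_channels, hidden)
            self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(3)])
            self.out = nn.Linear(hidden, n_classes)

        def forward(self, x):
            return self.out(self.blocks(torch.relu(self.inp(x))))

    model = SpectralResNet()
    spectrum = torch.randn(1, 331)        # stands in for one spectrometer reading
    print(model(spectrum).argmax(dim=1))  # predicted material class index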


Action Categorization for Computationally Improved Task Learning and Planning

arXiv.org Artificial Intelligence

This paper explores the problem of task learning and planning, contributing the Action-Category Representation (ACR) to improve the computational performance of both planning and reinforcement learning (RL). ACR is an algorithm-agnostic, abstract data representation that maps objects to action categories (groups of actions), inspired by the psychological concept of action codes. We validate our approach in the StarCraft and Lightworld domains; our results demonstrate that ACR improves the computational performance of both planning and RL by reducing the action space available to the agent.
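
A minimal sketch of the idea behind ACR, with hypothetical object types and action names (the mapping below is invented for illustration, not taken from the paper):

    # Sketch: an object-to-action-category mapping used to prune the action
    # space an agent must search. Object types and actions are placeholders.
    ACR = {
        "door":  {"open", "close"},
        "key":   {"pick_up", "use"},
        "enemy": {"attack", "flee"},
    }

    ALL_ACTIONS = {"open", "close", "pick_up", "use", "attack", "flee", "wait"}

    def applicable_actions(visible_objects):
        """Restrict choices to the categories licensed by visible objects."""
        actions = set()
        for obj in visible_objects:
            actions |= ACR.get(obj, set())
        return actions or ALL_ACTIONS  # fall back to the full space if no match

    print(applicable_actions(["door", "key"]))  # far smaller than ALL_ACTIONS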


SiRoK: Situated Robot Knowledge - Understanding the Balance Between Situated Knowledge and Variability

AAAI Conferences

General-purpose robots operating in a variety of environments, such as homes or hospitals, require a way to integrate abstract knowledge that is generalizable across domains with local, domain-specific observations. In this work, we examine different types and sources of data, with the goal of understanding how locally observed data and abstract knowledge might be fused. We introduce the Situated Robot Knowledge (SiRoK) framework, which integrates probabilistic abstract knowledge with semantic memory of the local environment. In a series of robot and simulation experiments we examine the tradeoffs in the reliability and generalization of both data sources. Our robot experiments show that the variability of object properties and locations in our knowledge base is indicative of the time it takes to generalize a concept and of its validity in the real world. The results of our simulations support those of our robot experiments, and give us insights into which source of knowledge to use for 31 object classes found in the real world.
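
A toy sketch of the kind of fusion decision such a framework must make, with an invented variance heuristic and placeholder numbers (not SiRoK's actual mechanism):

    # Toy sketch, not the SiRoK mechanism: prefer situated observations once
    # they are plentiful and consistent; otherwise fall back on abstract
    # knowledge that generalizes across environments.
    import statistics

    def fused_estimate(abstract_prior, local_observations, var_threshold=0.01):
        if len(local_observations) < 3:
            return abstract_prior                  # too little situated data
        if statistics.variance(local_observations) > var_threshold:
            return abstract_prior                  # situated data too variable
        return statistics.mean(local_observations)

    # e.g. estimated mass of a mug (kg): abstract prior vs. local measurements
    print(fused_estimate(0.35, [0.30, 0.31, 0.29]))  # -> ~0.30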


Semi-Supervised Haptic Material Recognition for Robots using Generative Adversarial Networks

arXiv.org Machine Learning

Material recognition enables robots to incorporate knowledge of material properties into their interactions with everyday objects. For example, material recognition opens up opportunities for clearer communication with a robot, such as "bring me the metal coffee mug", and recognizing plastic versus metal is crucial when using a microwave or oven. However, collecting labeled training data with a robot is often more difficult than collecting unlabeled data. We present a semi-supervised learning approach for material recognition that uses generative adversarial networks (GANs) with haptic features such as force, temperature, and vibration. Our approach achieves state-of-the-art results and enables a robot to estimate the material class of household objects with ~90% accuracy when 92% of the training data are unlabeled. We explore how well this approach can recognize the material of new objects and we discuss challenges facing generalization. To motivate learning from unlabeled training data, we also compare results against several common supervised learning classifiers. In addition, we have released the dataset used for this work, which consists of time-series haptic measurements from a robot that conducted thousands of interactions with 72 household objects.
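
The sketch below shows the standard semi-supervised GAN objective in which the discriminator is a (K+1)-way classifier and the extra class means "generated" (a generic formulation, not the paper's exact model; the feature dimension and class count are placeholders):

    # Sketch of the standard semi-supervised GAN discriminator loss: unlabeled
    # haptic samples still provide signal through the real-vs-fake terms.
    # Feature dimension (100) and class count (K=6) are placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    K = 6  # hypothetical number of material classes
    disc = nn.Sequential(nn.Linear(100, 128), nn.ReLU(), nn.Linear(128, K + 1))

    def d_loss(labeled_x, labels, unlabeled_x, fake_x):
        # supervised: labeled haptic features must get the right material
        sup = F.cross_entropy(disc(labeled_x)[:, :K], labels)
        # fakes belong to the extra "generated" class K
        fake = F.cross_entropy(disc(fake_x),
                               torch.full((len(fake_x),), K, dtype=torch.long))
        # unlabeled real samples should not be assigned to class K
        p_fake = F.softmax(disc(unlabeled_x), dim=1)[:, K]
        unsup = -torch.log1p(-p_fake).mean()
        return sup + fake + unsup

    x_l, y_l = torch.randn(8, 100), torch.randint(0, K, (8,))
    x_u, x_f = torch.randn(16, 100), torch.randn(16, 100)  # x_f stands in for G's output
    print(d_loss(x_l, y_l, x_u, x_f))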


An HRI Approach to Feature Selection

AAAI Conferences

Our research seeks to enable social robots to ask intelligent questions when learning tasks from human teachers. We use the paradigm of Learning from Demonstration (LfD) to address the problem of efficient learning of task policies by example (Chernova and Thomaz 2014). In this work, we explore how to leverage human domain knowledge for task model construction, by allowing users to directly select a set of the salient features for classification of objects used in the task being demonstrated.
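
An illustrative sketch, with invented feature names and placeholder data, of restricting a classifier to the features a human teacher marked as salient:

    # Sketch: train an object classifier only on teacher-selected features.
    # Feature names, data, and classifier choice are placeholders.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    FEATURES = ["color", "size", "weight", "shape"]
    user_selected = ["color", "shape"]            # chosen by the teacher
    cols = [FEATURES.index(f) for f in user_selected]

    X = np.random.rand(20, len(FEATURES))         # placeholder object features
    y = np.random.randint(0, 2, 20)               # placeholder object labels
    clf = KNeighborsClassifier(n_neighbors=3).fit(X[:, cols], y)
    print(clf.predict(X[:5, cols]))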


Towards Robot Adaptability in New Situations

AAAI Conferences

We present a system that integrates robot task execution with user input and feedback at multiple abstraction levels in order to achieve greater adaptability in new environments. The user can specify a hierarchical task, with the system interactively proposing logical action groupings within the task. During execution, if tasks fail because objects specified in the initial task description are not found in the environment, the robot proposes substitutions autonomously in order to repair the plan and resume execution. The user can assist the robot by reviewing substitutions. Finally, the user can train the robot to recognize and manipulate novel objects, either during training or during execution. In addition to this single-user scenario, we propose extensions that leverage crowdsourced input to reduce the need for direct user feedback.
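
A toy sketch of the object-substitution step, with an invented similarity table and threshold (the system's actual substitution mechanism may differ):

    # Sketch: when an object from the task description is missing, propose the
    # most similar available object and let the user approve or reject the swap.
    SIMILARITY = {                                # assumed given, e.g. learned
        ("mug", "cup"): 0.9,
        ("mug", "bowl"): 0.6,
        ("mug", "plate"): 0.2,
    }

    def propose_substitute(missing, available):
        score, best = max((SIMILARITY.get((missing, o), 0.0), o) for o in available)
        return best if score > 0.5 else None      # only propose plausible swaps

    print(propose_substitute("mug", ["plate", "bowl", "cup"]))  # -> "cup"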


Reports on the 2014 AAAI Fall Symposium Series

AI Magazine

The AAAI 2014 Fall Symposium Series was held Thursday through Saturday, November 13–15, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the seven symposia were Artificial Intelligence for Human-Robot Interaction; Energy Market Prediction; Expanding the Boundaries of Health Informatics Using AI; Knowledge, Skill, and Behavior Transfer in Autonomous Robots; Modeling Changing Perspectives: Reconceptualizing Sensorimotor Experiences; Natural Language Access to Big Data; and The Nature of Humans and Machines: A Multidisciplinary Discourse. The highlights of each symposium are presented in this report.


Reinforcement Learning from Demonstration through Shaping

AAAI Conferences

Reinforcement learning describes how a learning agent can achieve optimal behavior based on interactions with its environment and reward feedback. A limiting factor of reinforcement learning as employed in artificial intelligence is the often prohibitively large number of environment samples needed before the agent reaches a desirable level of performance. Learning from demonstration is an approach that provides the agent with demonstrations by a presumed expert, from which it should derive suitable behavior. Yet one of the challenges of learning from demonstration is that no guarantees can be provided for the quality of the demonstrations, and thus of the learned behavior. In this paper, we investigate the intersection of these two approaches, leveraging the theoretical guarantees provided by reinforcement learning and using expert demonstrations to speed up learning by biasing exploration through a process called reward shaping. This approach allows us to leverage human input without making potentially erroneous assumptions about the optimality of the demonstrations. We show experimentally that this approach requires significantly fewer demonstrations, is more robust to suboptimal demonstrations, and achieves much faster learning than the recently developed HAT algorithm.
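
The sketch below shows potential-based reward shaping, the standard formulation behind this kind of demonstration bias; the potential function here is a deliberately simple stand-in that rewards reaching demonstrated states:

    # Sketch of potential-based reward shaping: the shaped reward
    # r + gamma * phi(s') - phi(s) provably preserves the optimal policy while
    # biasing exploration toward states the expert visited. The grid states
    # and binary potential below are placeholders, not the paper's setup.
    GAMMA = 0.99
    demo_states = {(0, 0), (0, 1), (1, 1), (2, 1)}  # states the expert visited

    def phi(state):
        return 1.0 if state in demo_states else 0.0

    def shaped_reward(r, s, s_next):
        return r + GAMMA * phi(s_next) - phi(s)

    print(shaped_reward(0.0, (5, 5), (0, 0)))  # bonus for reaching a demo state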


Solving and Explaining Analogy Questions Using Semantic Networks

AAAI Conferences

Analogies are a fundamental human reasoning pattern that relies on relational similarity. Understanding how analogies are formed facilitates the transfer of knowledge between contexts. The approach presented in this work focuses on obtaining precise interpretations of analogies. We leverage noisy semantic networks to answer and explain a wide spectrum of analogy questions. The core of our contribution, the Semantic Similarity Engine, consists of methods for extracting and comparing graph contexts that reveal the relational parallelism that analogies are based on, while mitigating uncertainty in the semantic network. We demonstrate these methods in two tasks: answering multiple-choice analogy questions and generating human-readable analogy explanations. We evaluate our approach on two datasets totaling 600 analogy questions. Our results show reliable performance and a low false-positive rate in question answering; human evaluators agreed with 96% of our analogy explanations.
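
A toy sketch of relation-based analogy answering over a small semantic network (the graph, relation labels, and scoring are invented for illustration and far simpler than the Semantic Similarity Engine):

    # Sketch: answer A:B :: C:? by comparing the relation labels connecting
    # each candidate to C with those connecting B to A.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edge("puppy", "dog", rel="young_of")
    g.add_edge("kitten", "cat", rel="young_of")
    g.add_edge("bark", "dog", rel="sound_of")

    def relations(a, b):
        """Relation labels on direct edges between a and b (either direction)."""
        rels = set()
        if g.has_edge(a, b): rels.add(g[a][b]["rel"])
        if g.has_edge(b, a): rels.add(g[b][a]["rel"])
        return rels

    def solve(a, b, c, candidates):
        base = relations(a, b)
        # pick the candidate whose relations to c overlap the base pair's most
        return max(candidates, key=lambda d: len(base & relations(c, d)))

    print(solve("puppy", "dog", "kitten", ["cat", "bark"]))  # -> "cat"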