An HRI Approach to Learning from Demonstration

AAAI Conferences

The goal of this research is to enable robots to learn new things from everyday people. For years, the AI and Robotics community has sought to enable robots to efficiently learn new skills from a knowledgeable human trainer, and prior work has focused on several important technical problems. This vast body of research in the field of robot Learning from Demonstration has by and large been evaluated only with expert humans, typically the system's designer, neglecting a key point: this interaction takes place within a social structure that can guide and constrain the learning problem. We believe that addressing this point will be essential for developing systems that can learn from everyday people who are not experts in Machine Learning or Robotics. Our work focuses on new research questions involved in letting robots learn from everyday human partners (e.g., What kind of input do people want to provide a machine learner? How does their mental model of the learning process affect this input? What interfaces and interaction mechanisms can help people provide better input from a machine learning perspective?). Often our research begins with an investigation into the feasibility of a particular machine learning interaction, which leads to a series of research questions around re-designing both the interaction and the algorithm to better suit learning with end-users. We believe this equal focus on both the Machine Learning and the HRI contributions is key to making progress toward the goal of machines learning from humans. In this abstract we briefly overview four different projects that highlight our HRI approach to the problem of Learning from Demonstration.


Novel Interaction Strategies for Learning from Teleoperation

AAAI Conferences

The field of robot Learning from Demonstration (LfD) makes use of several input modalities for demonstrations (teleoperation, kinesthetic teaching, marker- and vision-based motion tracking). In this paper we present two experiments aimed at identifying and overcoming challenges associated with using teleoperation as an input modality for LfD. Our first experiment compares kinesthetic teaching and teleoperation and highlights some inherent problems associated with teleoperation, specifically uncomfortable user interactions and inaccurate robot demonstrations. Our second experiment focuses on overcoming these problems and designing the teleoperation interaction to be more suitable for LfD. In previous work we proposed a novel demonstration strategy based on the concept of keyframes, where demonstrations take the form of a discrete set of robot configurations. Keyframes can be naturally combined with continuous trajectory demonstrations to produce a hybrid strategy. We perform user studies to evaluate each of these demonstration strategies individually and show that keyframes are intuitive to users and particularly useful for providing noise-free demonstrations. We find that users prefer the hybrid strategy for demonstrating tasks to a robot by teleoperation.
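
To make the keyframe idea concrete, below is a minimal sketch of how trajectory, keyframe, and hybrid demonstrations might be represented, assuming a joint-space encoding of robot configurations; the class and method names are illustrative and not taken from the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List

# One robot configuration: a vector of joint angles (illustrative choice).
Configuration = List[float]

@dataclass
class TrajectoryDemo:
    """Continuous demonstration: configurations sampled at a fixed rate."""
    rate_hz: float
    samples: List[Configuration] = field(default_factory=list)

    def record(self, q: Configuration) -> None:
        self.samples.append(q)

@dataclass
class KeyframeDemo:
    """Keyframe demonstration: only the poses the teacher explicitly marks."""
    keyframes: List[Configuration] = field(default_factory=list)

    def mark(self, q: Configuration) -> None:
        self.keyframes.append(q)

@dataclass
class HybridDemo:
    """Hybrid demonstration: marked keyframes plus trajectory segments."""
    keyframes: List[Configuration] = field(default_factory=list)
    segments: List[TrajectoryDemo] = field(default_factory=list)
```

Under a representation like this, the hybrid strategy simply lets the teacher alternate between marking isolated keyframes and recording continuous trajectory segments within a single demonstration.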


Learning Tasks and Skills Together From a Human Teacher

AAAI Conferences

We are interested in developing Learning from Demonstration (LfD) systems that are tailored to be used by everyday people. We highlight and tackle the issues of skill learning, task learning, and interaction in the context of LfD. As part of the AAAI 2011 LfD Challenge, we will demonstrate some of our most recent Socially Guided Machine Learning work, in which the PR2 robot learns both low-level skills and high-level tasks through an ongoing social dialog with a human partner.


A Visual Analogy Approach to Source Case Retrieval in Robot Learning from Observation

AAAI Conferences

Learning from observation is an important goal in developing complete intelligent robots that learn interactively. We present a visual analogy approach toward an integrated, intelligent system capable of learning skills from observation. In particular, we focus on the task of retrieving a previously acquired case similar to a new, observed skill. We describe three approaches to case retrieval: feature matching, feature transformation, and fractal analogy. SIFT features and fractal encoding were used to represent the visual state prior to the skill demonstration, the final state after the skill has been executed, and the visual transformation between the two states. We found that the three methods are each useful for retrieving similar skill cases under different conditions pertaining to the observed skills.
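
As an illustration of the feature-matching flavor of retrieval only (feature transformation and fractal analogy are not shown), the sketch below uses OpenCV's SIFT implementation to score stored cases against a new observation; the function names, the library layout, and the 0.75 ratio threshold are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2  # pip install opencv-python (SIFT is included in recent releases)

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()  # L2 norm by default, suitable for SIFT descriptors

def sift_descriptors(image_path):
    """Load an image in grayscale and compute its SIFT descriptors."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc

def match_score(query, candidate):
    """Count descriptor matches that pass Lowe's ratio test."""
    if query is None or candidate is None:
        return 0
    pairs = matcher.knnMatch(query, candidate, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < 0.75 * p[1].distance)

def retrieve(observation_path, case_library):
    """Return the name of the stored case most similar to the observation.

    case_library maps case names to precomputed SIFT descriptors, e.g.
    {name: sift_descriptors(path) for name, path in stored_cases.items()}.
    """
    query = sift_descriptors(observation_path)
    return max(case_library,
               key=lambda name: match_score(query, case_library[name]))
```

The same scoring scheme can be applied to the pre-demonstration state, the post-demonstration state, or both, depending on which visual representation of the case is being compared.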


Online Development of Assistive Robot Behaviors for Collaborative Manipulation and Human-Robot Teamwork

AAAI Conferences

Collaborative robots that operate in the same immediate environment as human workers have the potential to improve their co-workers' efficiency and quality of work. In this paper we present a taxonomy of assistive behavior types alongside methods that enable a robot to learn assistive behaviors from interactions with a human collaborator during live activity completion. We begin with a brief survey of the state of the art in human-robot collaboration. We then focus on the challenges and issues surrounding the online development of assistive robot behaviors. Finally, we describe approaches for learning when and how to apply these behaviors, as well as for integrating them into a full end-to-end system using techniques derived from the learning from demonstration, policy iteration, and task network communities.
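
Since the abstract names policy iteration among the techniques for learning when to apply assistive behaviors, the sketch below shows a generic policy iteration loop over a toy MDP; the randomly generated transition and reward tables stand in for a model learned from interaction and are purely illustrative, not the authors' formulation.

```python
import numpy as np

# Toy model: 4 coarse task contexts (states) and 2 actions ("wait" vs. one
# assistive behavior). Transitions P[a, s, s'] and rewards R[s, a] are
# random placeholders for a model estimated from live collaboration.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(size=(n_states, n_actions))

def policy_iteration(P, R, gamma):
    """Standard policy iteration: exact evaluation, then greedy improvement."""
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[policy, np.arange(n_states)]   # transition rows under policy
        R_pi = R[np.arange(n_states), policy]   # per-state reward under policy
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to Q(s, a).
        Q = R + gamma * np.einsum('asn,n->sa', P, V)
        improved = Q.argmax(axis=1)
        if np.array_equal(improved, policy):
            return policy, V
        policy = improved

policy, values = policy_iteration(P, R, gamma)
print("action to take in each task context:", policy)
```

In this framing, each state would encode the collaborator's task context and each action would either wait or trigger an assistive behavior, so the returned policy specifies when each behavior applies.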