An HRI Approach to Learning from Demonstration

AAAI Conferences

The goal of this research is to enable robots to learn new things from everyday people. For years, the AI and Robotics community has sought to enable robots to efficiently learn new skills from a knowledgeable human trainer, and prior work has focused on several important technical problems. This vast body of research in robot Learning from Demonstration has, by and large, been evaluated only with expert humans, typically the system's designer. It thus neglects a key point: this interaction takes place within a social structure that can guide and constrain the learning problem. We believe that addressing this point will be essential for developing systems that can learn from everyday people who are not experts in Machine Learning or Robotics. Our work focuses on new research questions involved in letting robots learn from everyday human partners (e.g., What kind of input do people want to provide a machine learner? How does their mental model of the learning process affect this input? What interfaces and interaction mechanisms can help people provide better input from a machine learning perspective?). Often our research begins with an investigation into the feasibility of a particular machine learning interaction, which leads to a series of research questions around re-designing both the interaction and the algorithm to better suit learning with end-users. We believe this equal focus on both the Machine Learning and the HRI contributions is key to making progress toward the goal of machines learning from humans. In this abstract we briefly overview four different projects that highlight our HRI approach to the problem of Learning from Demonstration.

Learning at the Ends: From Hand to Tool Affordances in Humanoid Robots

Machine Learning

One of the open challenges in designing robots that operate successfully in the unpredictable human environment is how to make them able to predict what actions they can perform on objects, and what their effects will be, i.e., the ability to perceive object affordances. Since modeling all possible world interactions is infeasible, learning from experience is required, which poses the challenge of collecting a large amount of experiences (i.e., training data). Typically, a manipulative robot operates on external objects by using its own hands (or similar end-effectors), but in some cases the use of tools may be desirable. Nevertheless, it is reasonable to assume that while a robot can collect many sensorimotor experiences using its own hands, this cannot happen for all possible human-made tools. Therefore, in this paper we investigate the developmental transition from hand to tool affordances: which sensorimotor skills that a robot has acquired with its bare hands can be employed for tool use? By employing a visual and motor imagination mechanism to represent different hand postures compactly, we propose a probabilistic model to learn hand affordances, and we show how this model can generalize to estimate the affordances of previously unseen tools, ultimately supporting planning, decision-making, and tool selection tasks in humanoid robots. We present experimental results with the iCub humanoid robot, and we publicly release the collected sensorimotor data in the form of a hand posture affordances dataset.
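The transfer idea above can be illustrated with a minimal sketch: if hand postures and tools are described in a shared feature space, a predictor fit only on bare-hand experiences can be queried with unseen tool shapes. Everything here is a made-up illustration, not the paper's actual model or data: the features (effector width, contact curvature), the training outcomes, and the use of a k-nearest-neighbor estimate in place of the paper's probabilistic model.

```python
# Hypothetical sketch of hand-to-tool affordance transfer. Affordances are
# learned over geometric descriptors shared by hand postures and tools, so a
# model fit on bare-hand data can be queried with unseen tool shapes.
# Feature values and outcomes below are invented for illustration only.
import math

# (effector_width_cm, contact_curvature) -> did "pull object closer" succeed?
HAND_EXPERIENCES = [
    ((2.0, 0.9), 0), ((2.5, 0.8), 0),   # open flat hand: object slips away
    ((6.0, 0.2), 1), ((5.5, 0.3), 1),   # hooked/cupped hand: pull succeeds
    ((4.0, 0.5), 1), ((3.0, 0.7), 0),
]

def predict_success(features, k=3):
    """k-NN estimate of P(success | effector features)."""
    dists = sorted(
        (math.dist(features, f), outcome) for f, outcome in HAND_EXPERIENCES
    )
    nearest = [outcome for _, outcome in dists[:k]]
    return sum(nearest) / k

# Query with tools the robot has never used, described in the same space:
rake_tool = (7.0, 0.1)   # wide, hook-like: resembles cupped hand postures
stick_tool = (1.0, 1.0)  # thin, flat contact: resembles an open flat hand
p_rake, p_stick = predict_success(rake_tool), predict_success(stick_tool)
```

The rake-like tool lands near successful cupped-hand experiences, so it inherits a high pull-success estimate, while the thin stick inherits a low one; the same principle underlies the paper's probabilistic generalization from hand postures to tools.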

Hierarchical Skills and Skill-based Representation

AAAI Conferences

Autonomous robots demand complex behavior to deal with unstructured environments. To meet these expectations, a robot needs to address a suite of problems associated with long-term knowledge acquisition, representation, and execution in the presence of partial information. In this paper, we address these issues through the acquisition of broad, domain-general skills using an intrinsically motivated reward function. We show how these skills can be represented compactly and used hierarchically to obtain complex manipulation skills. We further present a Bayesian model that uses the learned skills to model objects in the world in terms of the actions they afford. We argue that our knowledge representation allows a robot to both predict the dynamics of objects in the world and recognize them.
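The claim that an affordance-based object model supports recognition can be sketched with a simple Bayes update: if each object class has a characteristic probability of each action succeeding on it, observing action outcomes shifts belief over classes. The object classes, actions, and probabilities below are invented stand-ins, not the paper's learned model.

```python
# Hypothetical sketch: recognizing objects by the actions they afford,
# via a Bayesian belief update. All classes, actions, and success
# probabilities are assumed illustration values, not learned data.

# P(success | action, object): how likely each action produces its
# expected effect on each object class.
AFFORDANCE_MODEL = {
    "ball":  {"push": 0.9, "grasp": 0.7, "stack": 0.1},
    "block": {"push": 0.6, "grasp": 0.8, "stack": 0.9},
}

def posterior(prior, observations):
    """Bayes update over object classes given (action, succeeded) pairs."""
    post = dict(prior)
    for action, succeeded in observations:
        for obj in post:
            p = AFFORDANCE_MODEL[obj][action]
            post[obj] *= p if succeeded else (1.0 - p)
    total = sum(post.values())
    return {obj: v / total for obj, v in post.items()}

# A successful stack strongly suggests a block rather than a ball:
belief = posterior({"ball": 0.5, "block": 0.5},
                   [("stack", True), ("push", True)])
```

After observing that stacking succeeded, the belief concentrates on "block", since stacking rarely succeeds on balls under the assumed model; this is the sense in which afforded actions let the robot recognize objects as well as predict their dynamics.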

Representation, Use, and Acquisition of Affordances in Cognitive Systems

AAAI Conferences

We review the psychological notion of affordances and examine it anew from a cognitive systems perspective. We distinguish between environmental affordances and their internal representation, choosing to focus on the latter. We consider issues that arise in representing mental affordances, using them to understand and generate plans, and learning them from experience. In each case, we present theoretical claims that, together, form an incipient theory of affordance in cognitive systems. We close by noting related research and proposing directions for future work in this arena.

Affordances as Transferable Knowledge for Planning Agents

AAAI Conferences

Robotic agents often map perceptual input to simplified representations that do not reflect the complexity and richness of the world. This simplification is due in large part to the limitations of planning algorithms, which fail in large stochastic state spaces on account of the well-known "curse of dimensionality." Existing approaches to this problem fail to prevent autonomous agents from considering many actions that would be obviously irrelevant to a human solving the same problem. We formalize the notion of affordances as knowledge added to a Markov Decision Process (MDP) that prunes actions in a state- and reward-general way. This pruning significantly reduces the number of state-action pairs the agent needs to evaluate in order to act near-optimally. We demonstrate our approach in the Minecraft domain as a model for robotic tasks, showing a significant increase in planning speed and a reduction in state-space exploration. Further, we provide a learning framework that enables an agent to learn affordances through experience, opening the door for agents to adapt and plan in new situations. We provide preliminary results indicating that the learning process effectively produces affordances that help solve an MDP faster, suggesting that affordances serve as an effective, transferable piece of knowledge for planning agents in large state spaces.
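The core mechanism — affordances as an action-pruning filter attached to an MDP — can be sketched on a toy gridworld. This is not the paper's Minecraft setup: the grid, the cost structure, and the particular affordance ("only actions that actually move the agent are applicable") are assumptions for illustration. The sketch runs value iteration twice and counts state-action backups to show that pruning evaluates fewer pairs while preserving the optimal values.

```python
# Hypothetical sketch: affordances as action-pruning knowledge for an MDP
# planner. The deterministic grid, unit step cost, and wall-pruning
# affordance are illustrative stand-ins for the paper's Minecraft domain.
GRID_W, GRID_H, GOAL, GAMMA = 6, 6, (5, 5), 0.95
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(s, a):
    """Deterministic transition; moving off the grid leaves the state unchanged."""
    dx, dy = ACTIONS[a]
    nx, ny = s[0] + dx, s[1] + dy
    return (nx, ny) if 0 <= nx < GRID_W and 0 <= ny < GRID_H else s

def afforded(s):
    # Affordance: prune actions that push into a wall (self-loops), which
    # can never be optimal under a per-step cost, so pruning is safe.
    return [a for a in ACTIONS if step(s, a) != s]

def value_iteration(action_fn, iters=500):
    """In-place value iteration; also counts state-action backups performed."""
    states = [(x, y) for x in range(GRID_W) for y in range(GRID_H)]
    V = {s: 0.0 for s in states}
    evaluations = 0
    for _ in range(iters):
        for s in states:
            if s == GOAL:          # absorbing goal state
                continue
            backups = []
            for a in action_fn(s):
                evaluations += 1
                backups.append(-1.0 + GAMMA * V[step(s, a)])
            V[s] = max(backups)
    return V, evaluations

V_full, n_full = value_iteration(lambda s: list(ACTIONS))  # no pruning
V_pruned, n_pruned = value_iteration(afforded)             # with affordances
```

With pruning, edge and corner states back up two or three actions instead of four, so the planner performs strictly fewer state-action evaluations per sweep, yet both runs converge to the same optimal value function — a miniature version of the state- and reward-general pruning argued for above.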