Robots


Exploring Affordances Using Human-Guidance and Self-Exploration

AAAI Conferences

Our work is aimed at service robots deployed in human environments that will need many specialized object manipulation skills. We believe robots should leverage end-users to quickly and efficiently learn the affordances of objects in their environment. Prior work has shown that this approach is promising because people naturally focus on showing salient rare aspects of the objects (Thomaz and Cakmak 2009). We replicate these prior results and build on them to create a semi-supervised combination of self and guided learning. We compare three conditions: (1) learning through self-exploration, (2) learning from demonstrations provided by 10 naive users, and (3) self-exploration seeded with the user demonstrations. Initial results suggest benefits of a mixed-initiative approach.


A Visual Analogy Approach to Source Case Retrieval in Robot Learning from Observation

AAAI Conferences

Learning by observation is an important goal in developing complete intelligent robots that learn interactively. We present a visual analogy approach toward an integrated, intelligent system capable of learning skills from observation. In particular, we focus on the task of retrieving a previously acquired case similar to a new, observed skill. We describe three approaches to case retrieval: feature matching, feature transformation, and fractal analogy. SIFT features and fractal encoding were used to represent the visual state prior to the skill demonstration, the final state after the skill has been executed, and the visual transformation between the two states. We found that each of the three methods is useful for retrieving similar skill cases under different conditions pertaining to the observed skills.
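The feature-matching retrieval idea above can be illustrated with a minimal sketch, not the authors' implementation: each stored case and the query are represented as a set of local descriptors (SIFT descriptors in the paper; tiny random vectors here), and retrieval returns the case whose descriptors lie nearest the query's. All names (`match_score`, `retrieve`, the toy library) are hypothetical.

```python
import numpy as np

def match_score(query_desc, case_desc):
    # For each query descriptor, find the distance to its nearest
    # descriptor in the case; a lower mean distance is a better match.
    dists = np.linalg.norm(
        query_desc[:, None, :] - case_desc[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def retrieve(query_desc, case_library):
    # Return the name of the stored case that best matches the query.
    return min(case_library,
               key=lambda name: match_score(query_desc, case_library[name]))

# Toy 4-D "descriptors" standing in for 128-D SIFT descriptors.
rng = np.random.default_rng(0)
case_a = rng.normal(0.0, 1.0, (5, 4))
case_b = rng.normal(5.0, 1.0, (5, 4))
library = {"case_a": case_a, "case_b": case_b}

query = case_a + rng.normal(0.0, 0.05, case_a.shape)  # a view close to case_a
print(retrieve(query, library))  # → case_a
```

In a real system the descriptors would come from a SIFT detector run on the pre- and post-demonstration images, and the nearest-neighbor search would use a ratio test and an approximate index rather than brute force.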


The AAAI 2011 Robot Exhibition

AI Magazine

In this article we report on the exhibits and challenges shown at the AAAI 2011 Robotics Program in San Francisco. The event included a broad demonstration of innovative research at the intersection of robotics and artificial intelligence. Through these multi-year challenge events, our goal has been to focus the research community's energy toward common platforms and common problems to work toward the greater goal of embodied AI.


Turn-Taking Based on Information Flow for Fluent Human-Robot Interaction

AI Magazine

Turn-taking is a fundamental part of human communication. Our goal is to devise a turn-taking framework for human-robot interaction that, like the human skill, represents something fundamental about interaction, generic to context or domain. We propose a model of turn-taking, and conduct an experiment with human subjects to inform this model. Our findings from this study suggest that information flow is an integral part of human floor-passing behavior. Following this, we implement autonomous floor relinquishing on a robot and discuss our insights into the nature of a general turn-taking model for human-robot interaction.


Report on the AAAI 2010 Robot Exhibition

AI Magazine

The 19th robotics program at the annual AAAI conference was held in Atlanta, Georgia, in July 2010. In this article we summarize three components of the exhibition: the small-scale manipulation challenge (robotic chess), the learning by demonstration challenge, and the education track. In each section we detail the challenge task. We also describe the participating teams, highlight the research questions they tackled, and briefly describe the systems they demonstrated.


Enabling Intelligence through Middleware: Report of the AAAI 2010 Workshop

AI Magazine

The AAAI 2010 Workshop on Enabling Intelligence through Middleware (held during the Twenty-Fourth AAAI Conference on Artificial Intelligence) focused on the issues and opportunities inherent in the robotics middleware packages that we use. The workshop consisted of three invited speakers and six middleware research presenters. This report presents the highlights of that discussion and the packages presented.


Joint Attention in Human-Robot Interaction

AAAI Conferences

We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This model is supported by psychological findings and matches the developmental timeline in humans. We present two experiments that test this model and investigate joint attention in human-robot interaction. The first experiment explored the effects of responding to joint attention on human-robot interaction. We show that robots responding to joint attention are more transparent to humans and are perceived as more competent and socially interactive. The second experiment studied the importance of ensuring joint attention in human-robot interaction. The data upheld our hypotheses that a robot's ensuring joint attention behavior yields better performance in human-robot interactive tasks and that ensuring joint attention behaviors are perceived as natural.