
Collaborating Authors

 Thomaz, Andrea Lockerd


Novel Interaction Strategies for Learning from Teleoperation

AAAI Conferences

The field of robot Learning from Demonstration (LfD) makes use of several input modalities for demonstrations (teleoperation, kinesthetic teaching, marker- and vision-based motion tracking). In this paper we present two experiments aimed at identifying and overcoming challenges associated with using teleoperation as an input modality for LfD. Our first experiment compares kinesthetic teaching and teleoperation and highlights two inherent problems with teleoperation: uncomfortable user interactions and inaccurate robot demonstrations. Our second experiment focuses on overcoming these problems by designing the teleoperation interaction to be more suitable for LfD. In previous work we proposed a novel demonstration strategy based on keyframes, in which a demonstration takes the form of a discrete, ordered set of robot configurations rather than a continuous trajectory. Keyframes can be naturally combined with continuous trajectory demonstrations to form a hybrid strategy. We perform user studies to evaluate each demonstration strategy individually and show that keyframes are intuitive to users and particularly useful for providing noise-free demonstrations. We find that users prefer the hybrid strategy for demonstrating tasks to a robot through teleoperation.
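To make the contrast between the three demonstration strategies concrete, here is a minimal Python sketch of how each might be represented. This is purely illustrative and not the paper's implementation: the class names, the joint-angle encoding of a configuration, and the segment structure of the hybrid demonstration are all our own assumptions.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical representations of the three demonstration strategies.
# A "configuration" is assumed here to be a list of joint angles.

@dataclass
class KeyframeDemo:
    """A demonstration as a discrete, ordered set of robot configurations."""
    keyframes: List[List[float]] = field(default_factory=list)

    def add_keyframe(self, config: List[float]) -> None:
        # The teacher explicitly marks each configuration, so no noisy
        # in-between motion from the teleoperation device is recorded.
        self.keyframes.append(list(config))

@dataclass
class TrajectoryDemo:
    """A demonstration as a densely sampled, continuous trajectory."""
    samples: List[List[float]] = field(default_factory=list)

    def record(self, config: List[float]) -> None:
        # Sampled at a fixed rate, so teleoperation jitter is captured too.
        self.samples.append(list(config))

@dataclass
class HybridDemo:
    """Continuous trajectory segments anchored by explicit keyframes."""
    keyframes: KeyframeDemo = field(default_factory=KeyframeDemo)
    segments: List[TrajectoryDemo] = field(default_factory=list)
```

In this sketch, the keyframe representation is "noise-free" simply because nothing is stored except the configurations the teacher deliberately marks, while the hybrid form lets a teacher mix precise anchor poses with free-form motion between them.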


Learning Tasks and Skills Together From a Human Teacher

AAAI Conferences

Robot Learning from Demonstration (LfD) research addresses the challenge of enabling humans to teach robots novel skills and tasks (Argall et al. 2009). The practical importance of LfD stems from the fact that it is impossible to pre-program all the skills and task knowledge a robot might need during its life-cycle. This opens up many interesting application areas for LfD, from homes to factory floors. An important motivation for our research agenda is that in many practical LfD applications the teacher will be an everyday end-user, not an expert in Machine Learning or robotics. Thus, our research explores the ways in which Machine Learning can exploit human social learning interactions, an approach we call Socially Guided Machine Learning (SGML).


Turn Taking for Human-Robot Interaction

AAAI Conferences

Applications of Human-Robot Interaction (HRI) in the not-so-distant future include robots that collaborate with factory workers or serve us as caregivers or waitstaff. To offer customized functionality in these dynamic environments, robots need to engage in real-time exchanges with humans, and thus need to be capable of participating in smooth turn-taking interactions. A long-term research goal in HRI is unstructured dialogic interaction, which would allow communication with robots that is as natural as communication with other humans. Turn-taking is the framework that provides structure for human communication. Consciously or subconsciously, humans communicate their understanding and control of the turn structure to a conversation partner using syntax, semantics, paralinguistic cues, eye gaze, and body language in a socially intelligent way. Our research aims to show that by implementing these turn-taking cues within an interaction architecture designed fundamentally for turn-taking, a robot becomes easier and more efficient for a human to interact with. This paper outlines our approach and an initial pilot study into this line of research.
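As a toy illustration of what acting on turn-taking cues can look like, the following Python sketch shows a robot deciding whether to take the conversational floor from observed human signals. It is not the authors' architecture: the cue names, the threshold, and the two-state turn model are invented for this example.

```python
from enum import Enum, auto

class Turn(Enum):
    HUMAN = auto()
    ROBOT = auto()

def robot_should_take_turn(speech_ended: bool,
                           gaze_at_robot: bool,
                           silence_ms: int) -> bool:
    """Combine simple cues that suggest the human is yielding the turn."""
    # End of speech combined with mutual gaze is treated as a strong
    # yield signal; a long silence alone is a weaker fallback cue.
    if speech_ended and gaze_at_robot:
        return True
    return silence_ms > 1200  # arbitrary threshold, for the sketch only

def step(turn: Turn, speech_ended: bool, gaze_at_robot: bool,
         silence_ms: int) -> Turn:
    # The robot takes the floor only when the human appears to yield it.
    if turn is Turn.HUMAN and robot_should_take_turn(
            speech_ended, gaze_at_robot, silence_ms):
        return Turn.ROBOT
    return turn
```

Even this toy version hints at the design question the paper raises: the cues must be fused and acted on in real time, which is why turn-taking is treated as a property of the whole interaction architecture rather than a feature bolted onto a dialogue system.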