For tasks that must be accomplished in unconstrained environments, as in the case of Urban Search and Rescue (USAR), human-robot collaboration is considered an indispensable component. Collaboration relies on accurate, mutually consistent models of robot and human perception, so that information critical to the accomplishment of a task is exchanged efficiently and in a simplified fashion that minimizes interaction overhead. In this paper, we highlight the features of a human-robot team, i.e. how robot perception may be combined with human perception under a task-driven direction for USAR. We elaborate on the design of the components of a mixed-initiative system in which a task assigned to the robot is planned and executed jointly with the human operator as a result of their interaction. Our description is solidified by demonstrating the application of mixed-initiative planning in a number of examples related to the morphological adaptation of the rescue robot.
This paper presents an experimental analysis of the Human-Robot Interaction (HRI) between human operators and a Human-Initiative (HI) variable-autonomy mobile robot during navigation tasks. In our HI system the human operator is able to switch the Level of Autonomy (LOA) on-the-fly between teleoperation (joystick control) and autonomous control (the robot navigates autonomously towards waypoints selected by the human). We present statistically validated results on: the preferred LOA of human operators; the amount of time spent in each LOA; the frequency of human-initiated LOA switches; and human perceptions of task difficulty. We also investigate the correlations between these variables; their correlation with performance in the primary task (navigation of the robot); and their correlation with performance in a secondary task, in which humans are required to perform mental rotations of 3D objects while simultaneously trying to continue with the primary task of driving the robot.
Designing interactive mobile robots is a multidisciplinary endeavor that profits from having people interact with robots in different contexts and observing the effects and impacts. To do so, two main issues must be addressed: integrating perceptual and decision-making capabilities so that robots can interact with people in meaningful and efficient ways, and enabling robots to move in human settings. This paper describes four robotic platforms demonstrated at the AAAI 2005 Robot Competition, each addressing these issues in its own way.
"Aurally Informed Performance" for mobile robots operating in natural environments brings difficult challenges, such as: localizing sound sources all around the robot; tracking these sources as they or the robot move; separating the sources in real-time as a pre-processing step for recognition and processing; managing dialogue and interaction in crowded conditions; and evaluating the performance of the different processing components in open conditions. In this paper, we present how we address these challenges by describing our eight-microphone system for sound source localization, tracking and separation, our ongoing work on its DSP implementation, and the use of the system on Spartacus, our mobile robot entry to the AAAI Mobile Robot Competitions addressing human-robot interaction in open settings.
Alers, Sjriek (Maastricht University) | Bloembergen, Daan (Maastricht University) | Claes, Daniel (Maastricht University) | Fossel, Joscha (Maastricht University) | Hennes, Daniel (Maastricht University) | Tuyls, Karl (Maastricht University)
Recently, various commercial telepresence robots have become available to the broader public. Here, we present the telepresence domain as a research platform for (re-)integrating AI. With MITRO, the Maastricht Intelligent Telepresence RObot, we built a low-cost working prototype of a robot system specifically designed for augmented and autonomous telepresence. Telepresence robots can be deployed in a wide range of application domains, and augmented presence with assisted control can greatly improve the experience for the user. The research domains that we focus on are human-robot interaction, navigation, and perception.