We propose a semi-autonomous teleoperation framework, developed in (Lee & Spong 2005), as a means for robotic missions to establish infrastructure and preparations for the sustained presence of humans on the Moon. This semi-autonomous framework consists of two control loops: 1) local autonomous control and inter-agent communication on the Moon ensure secure cooperative manipulation of objects by the multiple slave robots regardless of communication delays and human commands; and 2) a bilateral teleoperation loop enabling a remote human operator (on the Earth, in lunar orbit, or on the Moon) to tele-control the grasped object via the delayed communication channels. This architecture will be useful for tasks requiring cooperative manipulation, such as construction of human habitats, assembly of solar photovoltaic panels, and cooperative handling of excavated rocks for in-situ resource utilization. Simulation results are presented to highlight properties and capabilities of the proposed framework.
This paper presents an experimental analysis of the Human-Robot Interaction (HRI) between human operators and a Human-Initiative (HI) variable-autonomy mobile robot during navigation tasks. In our HI system the human operator is able to switch the Level of Autonomy (LOA) on the fly between teleoperation (joystick control) and autonomous control (the robot navigates autonomously towards waypoints selected by the human). We present statistically validated results on: the preferred LOA of human operators; the amount of time spent in each LOA; the frequency of human-initiated LOA switches; and human perceptions of task difficulty. We also investigate the correlations between these variables; their correlation with performance in the primary task (navigation of the robot); and their correlation with performance in a secondary task, in which humans are required to perform mental rotations of 3D objects while simultaneously trying to continue with the primary task of driving the robot.
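The essential property of a Human-Initiative system is that only the operator, never the robot, changes the Level of Autonomy. A minimal sketch of such a controller is given below; the class and method names (`HIController`, `operator_switch`, `command`) are illustrative assumptions, not the paper's actual implementation.

```python
from enum import Enum

class LOA(Enum):
    TELEOPERATION = "teleoperation"  # direct joystick control
    AUTONOMY = "autonomy"            # autonomous waypoint navigation

class HIController:
    """Human-Initiative controller: LOA switches are human-initiated only."""

    def __init__(self):
        self.loa = LOA.TELEOPERATION
        self.switch_count = 0  # frequency of human-initiated LOA switches

    def operator_switch(self, new_loa):
        # Only count genuine transitions, matching the on-the-fly switching
        # behaviour described above.
        if new_loa is not self.loa:
            self.loa = new_loa
            self.switch_count += 1

    def command(self, joystick_cmd, planner_cmd):
        # Route the motion command according to the current LOA.
        return joystick_cmd if self.loa is LOA.TELEOPERATION else planner_cmd
```

Logging `switch_count` and time spent in each LOA per trial would yield exactly the dependent variables the study analyzes.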
Let's say you've got a task that you want this robot to perform. It could be anything, from juggling 6 flaming chainsaws to helping a stroke victim do their exercises to baking an apple pie. It could be the key to your money-making robot company concept, or just a useful behavior to help you around the house. Whatever it is, inside your brain, latent in your mind, is a policy that, if transferred to the robot, would make it perform the task just the way you want it. The problem of getting the task from your mind to the robot is what we call Human-Robot Policy Transfer (HRPT).
Telerobotics has traditionally been human-centric. Since telerobotics evolved directly from other human-controlled devices, this approach seems only natural. Whatever the system, and regardless of the operational model, the paradigm has always been human-as-controller: the human receives information, processes it, and selects an action. The action then becomes the control input to the system. For telerobotics, however, this human-machine relationship often proves to be inefficient and ineffective.
The field of robot Learning from Demonstration (LfD) makes use of several input modalities for demonstrations (teleoperation, kinesthetic teaching, marker- and vision-based motion tracking). In this paper we present two experiments aimed at identifying and overcoming challenges associated with using teleoperation as an input modality for LfD. Our first experiment compares kinesthetic teaching and teleoperation and highlights some inherent problems associated with teleoperation; specifically, uncomfortable user interactions and inaccurate robot demonstrations. Our second experiment is focused on overcoming these problems and designing the teleoperation interaction to be more suitable for LfD. In previous work we have proposed a novel demonstration strategy using the concept of keyframes, where demonstrations take the form of a discrete set of robot configurations. Keyframes can be naturally combined with continuous trajectory demonstrations to generate a hybrid strategy. We perform user studies to evaluate each of these demonstration strategies individually and show that keyframes are intuitive to the users and are particularly useful in providing noise-free demonstrations. We find that users like the hybrid strategy best for demonstrating tasks to a robot by teleoperation.
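The distinction between the three demonstration strategies can be made concrete with a small data-representation sketch. This is a hypothetical illustration under assumed names (`trajectory_demo`, `keyframe_demo`, `hybrid_demo`), not the paper's actual encoding: a trajectory keeps every sampled configuration (including teleoperation noise), a keyframe demonstration keeps only the operator-marked configurations, and a hybrid mixes the two.

```python
def trajectory_demo(samples):
    """Continuous demonstration: keep every sampled joint configuration,
    noise included."""
    return {"type": "trajectory", "configs": list(samples)}

def keyframe_demo(samples, keyframe_indices):
    """Keyframe demonstration: keep only operator-marked configurations,
    discarding the noisy motion recorded between them."""
    return {"type": "keyframe",
            "configs": [samples[i] for i in keyframe_indices]}

def hybrid_demo(segments):
    """Hybrid demonstration: an ordered mix of keyframe and trajectory
    segments, e.g. keyframes for precise poses, trajectories for
    shape-critical motions."""
    return {"type": "hybrid", "segments": segments}
```

The noise-filtering benefit reported for keyframes falls directly out of this representation: everything between marked configurations is simply dropped.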