Developing collaborative robots that can operate productively outside of isolation and work safely in uninstrumented, human-populated environments is critically important for advancing the field of robotics. Especially in domains where modern robots are ineffective, we wish to leverage human-robot teaming to improve the efficiency, ability, and safety of human workers. Our work, outlined in this extended abstract, focuses on creating agents capable of human-robot teamwork by leveraging learning from demonstration, hierarchical task networks, multi-agent planning and state estimation, and intention recognition. We briefly describe our recent work within human-robot collaboration, including task comprehension, learning and performing assistive behaviors, and training novice human collaborators to become competent co-workers.
The variability of human behavior during plan execution poses a difficult challenge for human-robot teams. In this paper, we use concepts from theory of mind to enable robots to account for two sources of human variability during team operation. When faced with an unexpected action by a human teammate, a robot runs a simulation analysis of different hypothetical cognitive models of the human to identify the most likely cause of the human's behavior. This allows the cognitive robot to account for variance due both to differing knowledge and beliefs about the world and to the different possible paths the human could take with a given set of knowledge and beliefs. An experiment showed that cognitive robots equipped with this functionality are viewed as more natural and intelligent teammates than robots that say nothing when presented with human variability and robots that simply point out any discrepancies between the human's expected and actual behavior. Overall, this analysis yields an effective, general approach for determining what thought process underlies a human's actions.
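The model-selection idea in this abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the toy "door" planner, and the belief features are all assumptions introduced here. Each candidate cognitive model pairs a hypothesized belief state with a prediction of the action a human holding those beliefs would take; the robot attributes the observed behavior to the best-matching model.

```python
# Illustrative sketch (all names and the toy domain are assumptions, not
# the authors' system). Each candidate cognitive model is a hypothesized
# belief state; a toy planner predicts the action a human holding those
# beliefs would take, and the robot picks the model whose prediction best
# matches the observed action.

def predicted_action(beliefs):
    """Toy planner: a human who believes the door is locked takes the
    long route; otherwise they take the short route."""
    return "long_route" if beliefs.get("door_locked") else "short_route"

def most_likely_cause(candidate_beliefs, observed_action):
    """Return the name of the belief state that best explains the
    human's observed action."""
    scored = []
    for name, beliefs in candidate_beliefs.items():
        # Score each hypothetical model by whether its simulated
        # action matches what the human actually did.
        match = 1.0 if predicted_action(beliefs) == observed_action else 0.0
        scored.append((match, name))
    return max(scored)[1]

candidates = {
    "thinks_door_locked": {"door_locked": True},
    "thinks_door_open": {"door_locked": False},
}
print(most_likely_cause(candidates, "long_route"))  # -> thinks_door_locked
```

A fuller version would replace the binary match score with a likelihood over noisy action observations, but the structure — simulate each hypothetical cognitive model, then select the best explanation — is the same.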
The addition of a robot to a team can be difficult if the human teammates do not trust the robot. This can result in underutilization or disuse of the robot, even if the robot has skills or abilities that are necessary to achieve team goals or reduce risk. To help a robot integrate itself with a human team, we present an agent algorithm that allows a robot to estimate its own trustworthiness and adapt its behavior accordingly. As behavior adaptation is performed using case-based reasoning (CBR), information about the adaptation process is stored and reused to improve the efficiency of future adaptations.
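The trust-estimation and case-based adaptation loop described above might be sketched as follows. This is a hedged toy sketch under assumptions of my own: the class name, the moving-average trust update, and the feature-overlap similarity metric are illustrative stand-ins, not the algorithm from the abstract.

```python
# Hedged sketch (all names and formulas are illustrative assumptions).
# The agent keeps a running estimate of its trustworthiness from task
# outcomes and, when adapting its behavior, retrieves the most similar
# stored adaptation case (CBR) instead of searching from scratch.

class TrustAdaptiveAgent:
    def __init__(self, alpha=0.2):
        self.trust = 0.5      # estimated trustworthiness in [0, 1]
        self.alpha = alpha    # learning rate for the trust update
        self.case_base = []   # stored (situation_features, adaptation) cases

    def update_trust(self, outcome):
        """Exponential moving average over outcomes: 1.0 = success,
        0.0 = failure."""
        self.trust += self.alpha * (outcome - self.trust)

    def adapt(self, situation):
        """Retrieve the most similar stored case and reuse its
        adaptation; fall back to a default behavior if none exist."""
        if not self.case_base:
            return "act_conservatively"
        best = max(self.case_base,
                   key=lambda case: self._similarity(case[0], situation))
        return best[1]

    def store_case(self, situation, adaptation):
        """Record an adaptation so future adaptations are faster."""
        self.case_base.append((situation, adaptation))

    @staticmethod
    def _similarity(a, b):
        """Toy similarity: fraction of matching features."""
        keys = set(a) | set(b)
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys)
```

For example, after a failed handover the agent's trust estimate drops, and a previously stored case such as ({"task": "carry", "risk": "high"}, "slow_down") can be retrieved and reused when a similar situation recurs.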
Human-robot teaming offers great potential because of the opportunity to combine the strengths of heterogeneous agents. However, one of the critical challenges in realizing an effective human-robot team is efficient information exchange, both from the human to the robot and from the robot to the human. In this work, we present and analyze an augmented reality-enabled, gesture-based system that supports intuitive human-robot teaming through improved information exchange. Our proposed system requires no external instrumentation aside from human-wearable devices and shows promise of real-world applicability for service-oriented missions. Additionally, we present preliminary results from a pilot study with human participants and highlight lessons learned and open research questions that may help direct future development, fielding, and experimentation of autonomous HRI systems.