Human Robot Interactions: Creating Synergistic Cyber Forces

AAAI Conferences

Human-robot interaction (HRI) for mobile robots is still in its infancy. Most user interactions with robots have been limited to tele-operation, where the most common interface provided to the user has been the video feed from the robotic platform together with some means of directing the robot's path. For mobile robots with semi-autonomous capabilities, the user is also provided with a means of setting waypoints. More importantly, most HRI capabilities have been developed by robotics experts for use by robotics experts. As robots increase in capability and are able to perform more tasks autonomously, we need to think about the interactions that humans will have with robots and about the software architectures and user interface designs that can accommodate the human in the loop. We also need to design systems that can be used by domain experts who are not robotics experts. This paper outlines a theory of human-robot interaction and proposes the interactions and information needed by both humans and robots at the different levels of interaction, including an evaluation methodology based on situational awareness.


Building Appropriate Trust in Human-Robot Teams

AAAI Conferences

Future robotic systems are expected to transition from tools to teammates, characterized by increasingly autonomous, intelligent robots interacting with humans in a more naturalistic manner, approaching a relationship more akin to human–human teamwork. Given the impact of trust observed in other systems, trust in the robot team member will likely be critical to effective and safe performance. Our thesis in this paper is that trust in a robot team member must be appropriately calibrated rather than simply maximized. Invoking mental model theory, we describe how the human team member's understanding of the system contributes to trust in human-robot teaming. We discuss how mental models relate to the physical and behavioral characteristics of the robot, on the one hand, and to affective and behavioral outcomes, such as trust and system use/disuse/misuse, on the other. We expand upon our discussion by providing recommendations for best practices in the research and design of human-robot teams and other systems using artificial intelligence.


Critical Considerations for Human-Robot Interface Development

AAAI Conferences

The purpose of this paper is to draw upon the vast body of human factors research and indicate how existing results may be applied to the field of human-robot interfaces (HRIs). HRI development tends to be an afterthought, as researchers approach the problem from an engineering perspective; such a perspective implies that the HRI is designed and developed after the majority of the robotic system design has been completed. Additionally, many researchers claim that their HRI is "intuitive", "easy to use", etc., without including actual users in the design process or performing proper user testing. This paper attempts to indicate the importance of developing an HRI that meets the users' needs and requirements while the robot system itself is being developed. Human factors research on complex systems offers many results and theories that may be applied to the development of HRIs.


Telepresence Robots as a Research Platform for AI

AAAI Conferences

Recently, various commercial telepresence robots have become available to the broader public. Here, we present the telepresence domain as a research platform for (re-)integrating AI. With MITRO, the Maastricht Intelligent Telepresence RObot, we built a low-cost working prototype of a robot system specifically designed for augmented and autonomous telepresence. Telepresence robots can be deployed in a wide range of application domains, and augmented presence with assisted control can greatly improve the experience for the user. The research domains that we are focusing on are human-robot interaction, navigation, and perception.


Using Doctrines for Human-Robot Collaboration to Guide Ethical Behavior

AAAI Conferences

In this paper, we consider the issue of guiding ethical behavior in human-robot teams from a systemic viewpoint. Considering a team as a sociotechnical complex, we look at how responsibility for actions can arise through the interaction between the different actors in the team as they play specific roles. We define the notion of a role, discuss how roles establish a social network, and then use logical notions of multi-agent trust to formalize responsibility as accountability against the capabilities that are invoked during collaboration.
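
To make the role-and-capability framing above concrete, the sketch below shows one hypothetical way such a structure might be represented in code. It is not the paper's logical formalism: the names (Role, Actor, accountable_for) and the rule that accountability falls on actors whose roles grant the invoked capability are illustrative assumptions only.

```python
# Hypothetical sketch (assumption, not the paper's formalism):
# roles bundle capabilities, actors hold roles, and accountability for an
# action is traced to the actors whose roles grant the invoked capability.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    name: str
    capabilities: frozenset  # capability names this role may invoke

@dataclass
class Actor:
    name: str
    roles: list = field(default_factory=list)

    def capabilities(self):
        # Union of capabilities granted by all roles the actor plays.
        caps = set()
        for role in self.roles:
            caps |= role.capabilities
        return caps

def accountable_for(invoked_capability, team):
    """Return the actors whose roles grant the capability invoked by an action."""
    return [actor for actor in team if invoked_capability in actor.capabilities()]

# Example team: an operator who authorizes motion, a robot that plans and executes it.
operator = Actor("operator", [Role("supervisor", frozenset({"authorize_motion"}))])
robot = Actor("robot", [Role("navigator", frozenset({"plan_path", "execute_motion"}))])
team = [operator, robot]

print([a.name for a in accountable_for("execute_motion", team)])    # ['robot']
print([a.name for a in accountable_for("authorize_motion", team)])  # ['operator']
```

In this toy data model, the social network implied by role assignments determines who is answerable for an action; a fuller treatment, as the abstract indicates, would express this with logical notions of multi-agent trust rather than a simple lookup.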