Framework for Multi-Human Multi-Robot Interaction: Impact of Operational Context and Team Configuration on Interaction Task Demands

AAAI Conferences

The increasing prevalence and complexity of robotic and autonomous systems (RAS), and the promise of hybrid multi-human multi-RAS teams across a wide range of domains, pose a challenge to user interface designers, autonomy researchers, system developers, program managers, and manning/personnel analysts. These stakeholders need a principled, generalizable approach for analyzing such teams in their operational context in order to design effective team configurations and human-system interfaces. To meet this need, we have developed a theoretical framework and a software simulation that support analysis to understand and predict the type and number of human-RAS and human-human interaction task demands imposed by the mission and operational context. We extend previous research to multi-human multi-RAS teams and emphasize generalizability across a wide range of current and future RAS technologies and military and commercial applications. To ensure that the framework is grounded in mission and operational realities, we validated its structure with domain experts. The framework characterizes Operational Context, Team Configuration, and Interaction Task Demands, and defines the relationships among these constructs. These relationships are complex, and predicting Interaction Task Demands quickly becomes difficult even for small teams. To support such analysis, we therefore developed a software simulation (Beer, Rieth, Tran, & Cook, 2016) that predicts these demands and allows the framework to be tested and validated. Together, the framework and simulation are a step toward a systematic, well-defined, principled process for analyzing the design tradeoffs and requirements of future hybrid multi-human multi-RAS teams.
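To make the kind of prediction the framework supports concrete, the sketch below estimates hourly interaction task demands from a team configuration and an operational tempo. Everything here (the Robot and Operator classes, the demand formula, the rate constants) is a hypothetical illustration under assumed definitions, not the actual simulation of Beer, Rieth, Tran, and Cook (2016).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Robot:
    name: str
    autonomy: float       # 0.0 = fully teleoperated, 1.0 = fully autonomous
    failure_rate: float   # expected interventions per mission hour

@dataclass
class Operator:
    name: str
    capacity: float       # interaction tasks the operator can absorb per hour

def predict_demands(robots: List[Robot], operators: List[Operator],
                    mission_tempo: float) -> dict:
    """Estimate hourly interaction task demands per operator.

    mission_tempo scales event rates imposed by the operational
    context (e.g., 1.0 = routine, 2.0 = high tempo).
    """
    # Each robot imposes supervision demands that shrink with autonomy,
    # plus intervention demands driven by its failure rate.
    total = sum(mission_tempo * ((1.0 - r.autonomy) * 10.0 + r.failure_rate)
                for r in robots)
    share = total / len(operators)  # naive even split across the human team
    return {op.name: {"demand": share, "overloaded": share > op.capacity}
            for op in operators}

if __name__ == "__main__":
    robots = [Robot("UAV-1", autonomy=0.8, failure_rate=0.5),
              Robot("UGV-1", autonomy=0.3, failure_rate=1.0)]
    operators = [Operator("op-1", capacity=8.0)]
    print(predict_demands(robots, operators, mission_tempo=1.5))
```

Even this toy model shows why prediction becomes difficult quickly: demands grow with team size and tempo, while realistic workload allocation is far less uniform than the even split assumed here.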


2003 AAAI Robot Competition and Exhibition

AI Magazine

The Twelfth Annual Association for the Advancement of Artificial Intelligence (AAAI) Robot Competition and Exhibition was held in Acapulco, Mexico, in conjunction with the Eighteenth International Joint Conference on Artificial Intelligence. The events included the Robot Host and Urban Search and Rescue competitions, the AAAI Robot Challenge, and the Robot Exhibition. In the Robot Host event, robots acted as mobile information servers and guides to the exhibit area of the conference. In the Urban Search and Rescue competition, teams attempted to find victims in a simulated disaster area using teleoperated, semiautonomous, and autonomous robots. The AAAI Robot Challenge was a noncompetitive event in which robots attempted to attend the conference: locating the registration booth, registering, and then giving a talk to an audience. Finally, the Robot Exhibition gave robotics researchers an opportunity to demonstrate their robots' capabilities to conference attendees. The three days of events were capped by the two Robot Challenge participants giving talks and answering questions from the audience.


A Framework in which Robots and Humans Help Each Other

AAAI Conferences

Within the context of human/multi-robot teams, the "help me help you" paradigm offers opportunities in both directions: a team of robots can help a human operator accomplish a goal, and the human operator can help the team of robots accomplish the same, or a different, goal. Two scenarios are examined here. First, a team of robots helps a human operator search a remote facility by recognizing objects of interest. Second, the human operator helps the robots improve their position (localization) estimates by providing quality-control feedback.
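As a rough illustration of the second scenario, the sketch below re-weights a robot's localization hypotheses when the operator confirms or rejects them. The particle-style belief representation and the update factors are assumptions chosen for illustration, not the paper's algorithm.

```python
# Hypothetical quality-control update: the operator's confirm/reject
# feedback re-weights the robot's pose hypotheses.

def human_feedback_update(hypotheses, confirmed_index=None, rejected=()):
    """hypotheses: list of (pose, weight) pairs. The operator either
    confirms one hypothesis or rejects some; weights are re-normalized."""
    reweighted = []
    for i, (pose, w) in enumerate(hypotheses):
        if confirmed_index is not None:
            w *= 5.0 if i == confirmed_index else 0.2  # boost confirmed pose
        if i in rejected:
            w *= 0.05                                  # suppress rejected poses
        reweighted.append((pose, w))
    total = sum(w for _, w in reweighted) or 1.0
    return [(pose, w / total) for pose, w in reweighted]

# Example: the operator confirms hypothesis 0 after reviewing camera imagery.
beliefs = [((2.0, 3.1), 0.4), ((7.5, 1.0), 0.6)]
print(human_feedback_update(beliefs, confirmed_index=0))
```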


Challenges in Collaborative Scheduling of Human-Robot Teams

AAAI Conferences

We study the scheduling of human-robot teams in which the human and robotic agents share decision-making authority over scheduling decisions. Our goal is to design AI scheduling techniques that account for how people make decisions under different control schemes.
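One way to picture shared scheduling authority is a solver that honors the human's pinned assignments and fills in the rest automatically. The sketch below is a minimal, assumption-laden example of that division of control (the task names and the greedy load-balancing heuristic are illustrative, not the authors' technique).

```python
# Illustrative shared-authority scheduler: human-pinned assignments are
# honored first; the AI greedily balances the remaining tasks.

def schedule(tasks, agents, pinned=None):
    """tasks: {task: duration}; agents: list of agent names;
    pinned: {task: agent} decisions the human has already made."""
    pinned = pinned or {}
    load = {a: 0.0 for a in agents}
    assignment = {}
    for task, agent in pinned.items():        # human decisions first
        assignment[task] = agent
        load[agent] += tasks[task]
    for task, dur in sorted(tasks.items(), key=lambda kv: -kv[1]):
        if task in assignment:
            continue
        agent = min(load, key=load.get)       # AI fills remaining slots
        assignment[task] = agent
        load[agent] += dur
    return assignment, load

tasks = {"inspect": 3.0, "deliver": 2.0, "map": 4.0}
print(schedule(tasks, ["robot-1", "robot-2"], pinned={"inspect": "robot-1"}))
```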


Goal-Based Teleoperation for Robot Manipulation

AAAI Conferences

As robot algorithms for manipulation and navigation advance, and as robot hardware becomes more robust and readily available, industry increasingly demands robots that can perform sophisticated tasks in our homes and factories. For many years, direct teleoperation has been the most common form of robot control. However, because robot motion is complex, human operators must devote most of their attention to low-level motion control, which heightens their cognitive load. In this abstract, we propose a goal-directed approach to programming robots: a tool for modeling the world and specifying goal states for a given task. Operators set the initial positions of objects and their affordances, along with their goal positions, by imposing three-dimensional (3D) templates on point clouds. Robots then solve the given task using a combination of task and motion planning algorithms.
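A minimal sketch of what such a goal specification might look like: each 3D template fit to the point cloud yields an object with affordances, an initial pose, and a goal pose, which a (stubbed) planner consumes. The ObjectTemplate structure, the simplified Pose representation, and the one-action-per-object planner stub are all hypothetical, standing in for a real task-and-motion planning pipeline.

```python
from dataclasses import dataclass
from typing import List, Tuple

Pose = Tuple[float, float, float]  # x, y, z; orientation omitted for brevity

@dataclass
class ObjectTemplate:
    label: str
    affordances: List[str]   # e.g., "graspable", "pushable"
    initial_pose: Pose       # where the template was fit on the point cloud
    goal_pose: Pose          # where the operator wants the object to end up

def plan(templates: List[ObjectTemplate]) -> List[str]:
    """Stand-in for a combined task-and-motion planner: emits one
    high-level action per object; a real planner would also produce
    collision-free trajectories for each action."""
    return [f"move {t.label} from {t.initial_pose} to {t.goal_pose}"
            for t in templates]

mug = ObjectTemplate("mug", ["graspable"], (0.4, 0.1, 0.8), (0.1, 0.5, 0.8))
print(plan([mug]))
```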