A Framework for Semantics-based Situational Awareness during Mobile Robot Deployments
Ruan, Tianshu, Ramesh, Aniketh, Wang, Hao, Johnstone-Morfoisse, Alix, Altindal, Gokcenur, Norman, Paul, Nikolaou, Grigoris, Stolkin, Rustam, Chiou, Manolis
Deployment of robots into hazardous environments typically involves a "Human-Robot Teaming" (HRT) paradigm, in which a human supervisor interacts with a remotely operating robot inside the hazardous zone. Situational Awareness (SA) is vital for enabling HRT, supporting navigation, planning, and decision-making. This paper explores issues of higher-level "semantic" information and understanding in SA. In semi-autonomous or variable-autonomy paradigms, different types of semantic information may be important, in different ways, for both the human operator and the autonomous agent controlling the robot. We propose a generalizable framework for acquiring and combining multiple modalities of semantic-level SA during remote deployments of mobile robots. We demonstrate the framework with an example application to search and rescue (SAR) in disaster response robotics. We propose a set of "environment semantic indicators" that can reflect a variety of different types of semantic information. Based on these indicators, we propose a metric called "Situational Semantic Richness (SSR)", which combines multiple semantic indicators to summarise the overall situation of the environment. The SSR indicates whether an information-rich and complex situation has been encountered, which may require advanced reasoning from robots and humans and hence the attention of the expert human operator. The framework is tested on a Jackal robot in a mock-up disaster response environment. Experimental results demonstrate that the proposed semantic indicators are sensitive to changes in different modalities of semantic information across different scenes, and that the SSR metric reflects overall semantic changes in the situations encountered. Situational Awareness is vital for robots deployed in the field to function with sufficient autonomy, resiliency, and robustness.
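For illustration only, the listing below sketches how a set of normalised semantic indicator scores could be aggregated into a single scalar in the spirit of SSR. The indicator names, the weights, and the weighted-mean aggregation are assumptions made for this sketch, not the SSR definition from the paper.

```python
# Minimal sketch: combine per-scene semantic indicator scores into one
# "richness" value. Indicator names, weights, and the weighted mean are
# illustrative assumptions, not the SSR formulation from the paper.

def situational_semantic_richness(indicators, weights=None):
    """Aggregate normalised indicator scores (each in [0, 1]) into one scalar."""
    if weights is None:
        weights = {name: 1.0 for name in indicators}
    total_weight = sum(weights[name] for name in indicators)
    return sum(weights[name] * score for name, score in indicators.items()) / total_weight

# Hypothetical indicators extracted from a single scene.
scene = {"objects_of_interest": 0.7, "signs_of_hazard": 0.9, "human_presence": 0.2}
print(f"SSR (toy aggregation): {situational_semantic_richness(scene):.2f}")
```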
The ATTUNE model for Artificial Trust Towards Human Operators
Petousakis, Giannis, Cangelosi, Angelo, Stolkin, Rustam, Chiou, Manolis
This paper presents a novel method to quantify trust in HRI. It proposes an HRI framework for estimating a robot's trust towards the human in the context of a narrow, specified task. The framework produces a real-time estimate of an AI agent's Artificial Trust towards a human partner interacting with a teleoperated mobile robot. The approach is based on principles drawn from Theory of Mind, including information about the human's state, actions, and intent. The framework instantiates the ATTUNE model for Artificial Trust Towards Human Operators, which uses metrics on the operator's state of attention, navigational intent, actions, and performance to quantify trust towards them. The model is tested on a pre-existing dataset of recordings (ROSbags) from a human trial in a simulated disaster response scenario. The performance of ATTUNE is evaluated through qualitative and quantitative analyses, whose results provide insight into the next stages of the research and help refine the proposed approach.
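As a hedged illustration of fusing operator-related signals into a running trust estimate, the sketch below applies exponential smoothing over four hypothetical signals (attention, intent agreement, action quality, performance). The signal names, equal weights, and smoothing rule are assumptions for this sketch, not the ATTUNE model itself.

```python
# Minimal sketch: maintain a running trust estimate from operator signals.
# The signals, weights, and exponential smoothing are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class OperatorObservation:
    attention: float         # 0 = distracted, 1 = fully attentive
    intent_agreement: float  # how well inferred intent matches the task goal
    action_quality: float    # e.g. smoothness / safety of joystick commands
    performance: float       # e.g. progress towards the navigation goal

class TrustEstimator:
    def __init__(self, alpha=0.2, trust=0.5):
        self.alpha = alpha   # smoothing factor for the running estimate
        self.trust = trust   # start from a neutral prior

    def update(self, obs: OperatorObservation) -> float:
        evidence = 0.25 * (obs.attention + obs.intent_agreement
                           + obs.action_quality + obs.performance)
        self.trust = (1 - self.alpha) * self.trust + self.alpha * evidence
        return self.trust

estimator = TrustEstimator()
print(f"trust = {estimator.update(OperatorObservation(0.9, 0.8, 0.7, 0.6)):.2f}")
```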
Learning effects in variable autonomy human-robot systems: how much training is enough?
Chiou, Manolis, Talha, Mohammed, Stolkin, Rustam
This paper investigates learning effects and human operator training practices in variable-autonomy robotic systems. These factors are known to affect the performance of a human-robot system, yet they are frequently overlooked. We present results from an experiment inspired by a search and rescue scenario, in which operators remotely controlled a mobile robot using either Human-Initiative (HI) or Mixed-Initiative (MI) control. The evidence suggests learning effects in both primary (navigation) task and secondary (distractor) task performance. Further evidence is provided that MI and HI performance in a pure navigation task is equal. Lastly, guidelines are proposed for experimental design and operator training practices.
A Supervised Machine Learning Approach to Operator Intent Recognition for Teleoperated Mobile Robot Navigation
Tsagkournis, Evangelos, Panagopoulos, Dimitris, Petousakis, Giannis, Nikolaou, Grigoris, Stolkin, Rustam, Chiou, Manolis
In applications that involve human-robot interaction (HRI), human-robot teaming (HRT), and cooperative human-machine systems, inferring the human partner's intent is of critical importance. This paper presents a method for inferring the human operator's navigational intent, in the context of mobile robots that provide full or partial (e.g., shared control) teleoperation. We propose the Machine Learning Operator Intent Inference (MLOII) method, which a) processes spatial data collected by the robot's sensors and b) uses a supervised machine learning algorithm to estimate the operator's most probable navigational goal online. The method's ability to reliably and efficiently infer the operator's intent is evaluated experimentally in realistically simulated exploration and remote inspection scenarios. The results, in terms of accuracy and uncertainty, indicate that the proposed method is comparable to another state-of-the-art method from the literature.
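A minimal sketch of framing operator goal inference as supervised classification over spatial features is shown below, using scikit-learn on toy data. The feature layout (distance and bearing to two candidate goals), the synthetic labels, and the random-forest classifier are assumptions for this sketch, not the MLOII pipeline.

```python
# Minimal sketch: treat navigational goal inference as supervised classification
# over spatial features. Features, labels, and model choice are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy features: [dist_to_goal_A, bearing_to_goal_A, dist_to_goal_B, bearing_to_goal_B]
X = rng.uniform(0, 10, size=(200, 4))
# Label 1 = goal B looks "cheaper" than goal A, else label 0 = goal A.
y = (X[:, 2] + np.abs(X[:, 3]) < X[:, 0] + np.abs(X[:, 1])).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Online use: each new robot pose yields a feature vector and goal probabilities.
features = np.array([[2.0, 0.1, 7.5, 1.2]])
probs = clf.predict_proba(features)[0]
print(f"P(goal A) = {probs[0]:.2f}, P(goal B) = {probs[1]:.2f}")
```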
Robot Health Indicator: A Visual Cue to Improve Level of Autonomy Switching Systems
Ramesh, Aniketh, Englund, Madeleine, Theodorou, Andreas, Stolkin, Rustam, Chiou, Manolis
Using different Levels of Autonomy (LoA), a human operator can vary the extent of control they have over a robot's actions. LoAs enable operators to mitigate a robot's performance degradation or limitations in its autonomous capabilities. However, LoA regulation, on top of other tasks, can overload an operator's cognitive abilities. Inspired by video game user interfaces, we study whether adding a 'Robot Health Bar' to the robot control UI can reduce the cognitive demand and perceptual effort required for LoA regulation while promoting trust and transparency. This Health Bar uses the robot vitals and robot health framework to quantify and present runtime performance degradation in robots. Results from our pilot study indicate that, when using the health bar, operators used manual control more in order to minimise the risk of robot failure during periods of high performance degradation. The study also provided insights and lessons to inform subsequent experiments on human-robot teaming.
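The sketch below illustrates one way a handful of robot "vitals" could be collapsed into a single health percentage and rendered as a text bar in a control UI. The vitals, weights, and rendering are assumptions for this sketch and do not reproduce the robot vitals and robot health framework.

```python
# Minimal sketch: fold degradation scores into one health value and render it.
# Vitals, weights, and the text-bar rendering are illustrative assumptions.

def robot_health(vitals: dict, weights: dict) -> float:
    """Each vital is a degradation score in [0, 1]; health is 1 - weighted degradation."""
    total = sum(weights.values())
    degradation = sum(weights[k] * vitals[k] for k in vitals) / total
    return max(0.0, 1.0 - degradation)

def render_health_bar(health: float, width: int = 20) -> str:
    filled = round(health * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {health:.0%}"

vitals = {"wheel_slip": 0.3, "localisation_error": 0.1, "cpu_load": 0.6}
weights = {"wheel_slip": 1.0, "localisation_error": 2.0, "cpu_load": 0.5}
print(render_health_bar(robot_health(vitals, weights)))
```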
A Hierarchical Variable Autonomy Mixed-Initiative Framework for Human-Robot Teaming in Mobile Robotics
Panagopoulos, Dimitris, Petousakis, Giannis, Ramesh, Aniketh, Ruan, Tianshu, Nikolaou, Grigoris, Stolkin, Rustam, Chiou, Manolis
This paper presents a Mixed-Initiative (MI) framework for addressing the problem of control authority transfer between a remote human operator and an AI agent when cooperatively controlling a mobile robot. Our Hierarchical Expert-guided Mixed-Initiative Control Switcher (HierEMICS) leverages information on the human operator's state and intent, with control switching policies based on a criticality hierarchy. An experimental evaluation was conducted in a high-fidelity simulated disaster response and remote inspection scenario, comparing HierEMICS with a state-of-the-art Expert-guided Mixed-Initiative Control Switcher (EMICS) in the context of mobile robot navigation. Results suggest that HierEMICS reduces conflicts for control between the human and the AI agent, a fundamental challenge in both the MI control paradigm and the related shared control paradigm. Additionally, we provide statistically significant evidence of improved navigational safety (i.e., fewer collisions), more efficient LOA switching, and reduced conflict for control.
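As a rough sketch of criticality-ordered LOA switching, the code below evaluates triggers from most to least critical and lets the first active trigger decide the Level of Autonomy. The trigger names and thresholds are assumptions for this sketch, not the HierEMICS policy.

```python
# Minimal sketch: a switcher that checks triggers in order of criticality and
# lets the first active trigger choose the LOA. Rules and thresholds are
# illustrative assumptions.

TELEOP, AUTONOMY = "teleoperation", "autonomy"

def choose_loa(state: dict, current_loa: str) -> str:
    # Ordered from most to least critical; the first matching rule wins.
    rules = [
        ("imminent_collision", lambda s: s["min_obstacle_dist"] < 0.3, AUTONOMY),
        ("operator_inattentive", lambda s: s["operator_attention"] < 0.2, AUTONOMY),
        ("operator_intent_conflict", lambda s: s["intent_agreement"] < 0.3, TELEOP),
    ]
    for name, condition, loa in rules:
        if condition(state):
            return loa
    return current_loa  # no rule fired: keep the current LOA to avoid spurious switches

state = {"min_obstacle_dist": 1.2, "operator_attention": 0.1, "intent_agreement": 0.8}
print(choose_loa(state, TELEOP))  # -> "autonomy" (operator_inattentive fires)
```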
A Taxonomy of Semantic Information in Robot-Assisted Disaster Response
Ruan, Tianshu, Wang, Hao, Stolkin, Rustam, Chiou, Manolis
This paper proposes a taxonomy of semantic information in robot-assisted disaster response. Robots are increasingly being used in hazardous-environment industries and by emergency response teams to perform various tasks. Operational decision-making in such applications requires a complex semantic understanding of environments that are remote from the human operator. Low-level sensory data from the robot must be transformed into perception and informative cognition. Currently, such cognition is predominantly performed by a human expert, who monitors remote sensor data such as robot video feeds. This engenders a need for AI-generated semantic understanding capabilities on the robot itself. Current work on semantics and AI lies towards the relatively academic end of the research spectrum, and hence remains removed from the practical realities of first responder teams. We aim for this paper to be a step towards bridging this divide. We first review common robot tasks in disaster response and the types of information such robots must collect. We then organize the types of semantic features and understanding that may be useful in disaster operations into a taxonomy of semantic information, and briefly review the current state-of-the-art semantic understanding techniques. We highlight potential synergies, but also identify gaps that need to be bridged in order to apply these ideas in practice. We aim to stimulate the research needed to adapt, robustify, and implement state-of-the-art AI semantics methods in the challenging conditions of disasters and first responder scenarios.
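A taxonomy of this kind can be represented in software as a simple nested mapping that incoming semantic labels are matched against. The sketch below shows such a structure; the categories and labels are illustrative assumptions, not the taxonomy proposed in the paper.

```python
# Minimal sketch: a toy taxonomy as a nested mapping, used to categorise
# detected semantic labels. Categories and labels are illustrative assumptions.

TAXONOMY = {
    "objects": ["victim", "door", "valve"],
    "hazards": ["fire", "smoke", "radiation sign"],
    "scene context": ["corridor", "stairwell", "collapsed structure"],
}

def categorise(label: str):
    """Return the top-level category a detected semantic label belongs to."""
    for category, labels in TAXONOMY.items():
        if label in labels:
            return category
    return None

print(categorise("smoke"))  # -> "hazards"
```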
Human operator cognitive availability aware Mixed-Initiative control
Petousakis, Giannis, Chiou, Manolis, Nikolaou, Grigoris, Stolkin, Rustam
This paper presents a Cognitive Availability Aware Mixed-Initiative Controller for remotely operated mobile robots. The controller enables dynamic switching between different Levels of Autonomy (LOA), initiated by either the AI or the human operator. It leverages a state-of-the-art computer vision method and an off-the-shelf web camera to infer the cognitive availability of the operator and to inform AI-initiated LOA switching, which constitutes a qualitative advancement over previous Mixed-Initiative (MI) controllers. The controller is evaluated in a disaster response experiment in which human operators conduct an exploration task with a remote robot. MI systems are shown to effectively assist the operators, as demonstrated by quantitative and qualitative results on performance and workload. Additionally, some insights into the experimental difficulties of evaluating complex MI controllers are presented.
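The sketch below illustrates, under stated assumptions, how an AI-initiated LOA switch could be gated on an operator-availability signal (e.g. derived from a webcam-based attention estimator). The threshold and the switching condition are assumptions for this sketch, not the controller evaluated in the paper.

```python
# Minimal sketch: only hand control to the autonomy when robot performance
# drops AND the operator is not cognitively available to intervene themselves.
# The threshold and condition are illustrative assumptions.

def ai_initiated_loa(current_loa: str, robot_performance: float,
                     operator_available: bool, threshold: float = 0.4) -> str:
    """Return the LOA the AI should request given performance and availability."""
    if robot_performance < threshold and not operator_available:
        return "autonomy"
    return current_loa

print(ai_initiated_loa("teleoperation", robot_performance=0.25,
                       operator_available=False))  # -> "autonomy"
```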
Human-Initiative Variable Autonomy: An Experimental Analysis of the Interactions Between a Human Operator and a Remotely Operated Mobile Robot which also Possesses Autonomous Capabilities
Chiou, Manolis (University of Birmingham) | Bieksaite, Goda (University of Birmingham) | Hawes, Nick (University of Birmingham) | Stolkin, Rustam (University of Birmingham)
This paper presents an experimental analysis of the Human-Robot Interaction (HRI) between human operators and a Human-Initiative (HI) variable-autonomy mobile robot during navigation tasks. In our HI system the human operator is able to switch the Level of Autonomy (LOA) on-the-fly between teleoperation (joystick control) and autonomous control (robot navigates autonomously towards waypoints selected by the human). We present statistically-validated results on: the preferred LOA of human operators; the amount of time spent in each LOA; the frequency of human-initiated LOA switches; and human perceptions of task difficulty. We also investigate the correlation between these variables; their correlation with performance in the primary task (navigation of the robot); and their correlation with performance in a secondary task, in which humans are required to perform mental rotations of 3D objects, while simultaneously trying to continue with the primary task of driving the robot.
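For illustration, the sketch below shows a minimal Human-Initiative switch between joystick teleoperation and autonomous waypoint navigation, where only the operator toggles the LOA. The class structure and command selection are assumptions for this sketch, not the original system.

```python
# Minimal sketch: a Human-Initiative LOA toggle that forwards either the
# joystick command or the autonomous planner command. Structure is illustrative.

class HumanInitiativeController:
    def __init__(self):
        self.loa = "teleoperation"

    def on_switch_request(self):
        """Called when the operator presses the LOA-switch button."""
        self.loa = "autonomy" if self.loa == "teleoperation" else "teleoperation"

    def command(self, joystick_cmd, planner_cmd):
        """Forward whichever command source the current LOA selects."""
        return joystick_cmd if self.loa == "teleoperation" else planner_cmd

ctrl = HumanInitiativeController()
print(ctrl.command("joy: forward 0.5 m/s", "nav: go to waypoint 3"))  # teleoperation
ctrl.on_switch_request()
print(ctrl.command("joy: forward 0.5 m/s", "nav: go to waypoint 3"))  # autonomy
```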