Human-Robot Red Teaming for Safety-Aware Reasoning

Sheetz, Emily, Zemler, Emma, Savchenko, Misha, Rainen, Connor, Holum, Erik, Graf, Jodi, Albright, Andrew, Azimi, Shaun, Kuipers, Benjamin

arXiv.org Artificial Intelligence 

While much research explores improving robot capabilities, there is a deficit in research on how robots are expected to perform tasks safely, especially in high-risk problem domains. Robots must earn the trust of human operators in order to be effective collaborators in safety-critical tasks, specifically those where robots operate in human environments. We propose the human-robot red teaming paradigm for safety-aware reasoning. We expect humans and robots to work together to challenge assumptions about an environment and explore the space of hazards that may arise. This exploration will enable robots to perform safety-aware reasoning, specifically hazard identification, risk assessment, risk mitigation, and safety reporting. We demonstrate that: (a) human-robot red teaming allows human-robot teams to plan to perform tasks safely in a variety of domains, and (b) robots with different embodiments can learn to operate safely in two different environments--a lunar habitat and a household--with varying definitions of safety. Taken together, our work on human-robot red teaming for safety-aware reasoning demonstrates the feasibility of this approach for safe operation and for promoting trust on human-robot teams in safety-critical problem domains.

I. INTRODUCTION

Enabling robots to reason over risks is a crucial capability for performing collaborative assistive tasks in safety-critical domains.
