Naughton, Patrick
Analysis and Perspectives on the ANA Avatar XPRIZE Competition
Hauser, Kris, Watson, Eleanor, Bae, Joonbum, Bankston, Josh, Behnke, Sven, Borgia, Bill, Catalano, Manuel G., Dafarra, Stefano, van Erp, Jan B. F., Ferris, Thomas, Fishel, Jeremy, Hoffman, Guy, Ivaldi, Serena, Kanehiro, Fumio, Kheddar, Abderrahmane, Lannuzel, Gaelle, Morie, Jacqueline Ford, Naughton, Patrick, NGuyen, Steve, Oh, Paul, Padir, Taskin, Pippine, Jim, Park, Jaeheung, Pucci, Daniele, Vaz, Jean, Whitney, Peter, Wu, Peggy, Locke, David
The ANA Avatar XPRIZE was a four-year competition to develop a robotic "avatar" system to allow a human operator to sense, communicate, and act in a remote environment as though physically present. The competition featured a unique requirement that judges would operate the avatars after less than one hour of training on the human-machine interfaces, and avatar systems were judged on both objective and subjective scoring metrics. This paper presents a unified summary and analysis of the competition from technical, judging, and organizational perspectives. We study the use of telerobotics technologies and innovations pursued by the competing teams in their avatar systems, and correlate their use with judges' task performance and subjective survey ratings. We also summarize perspectives from team leads, judges, and organizers about the competition's execution and impact to inform the future development of telerobotics and telepresence.
Integrating Open-World Shared Control in Immersive Avatars
Naughton, Patrick, Nam, James Seungbum, Stratton, Andrew, Hauser, Kris
Teleoperated avatar robots allow people to transport their manipulation skills to environments that may be difficult or dangerous to work in. Current systems can give operators direct control of many components of the robot to immerse them in the remote environment, but operators still struggle to complete tasks as competently as they could in person. We present a framework for incorporating open-world shared control into avatar robots to combine the benefits of direct and shared control. This framework preserves the fluency of our avatar interface by minimizing obstructions to the operator's view and by using the same interface for direct, shared, and fully autonomous control. In a human subjects study (N=19), we find that operators using this framework complete a range of tasks significantly more quickly and reliably than those who do not.