avatar system

iCub3 Avatar System: Enabling Remote Fully-Immersive Embodiment of Humanoid Robots

Dafarra, Stefano, Pattacini, Ugo, Romualdi, Giulio, Rapetti, Lorenzo, Grieco, Riccardo, Darvish, Kourosh, Milani, Gianluca, Valli, Enrico, Sorrentino, Ines, Viceconte, Paolo Maria, Scalzo, Alessandro, Traversaro, Silvio, Sartore, Carlotta, Elobaid, Mohamed, Guedelha, Nuno, Herron, Connor, Leonessa, Alexander, Draicchio, Francesco, Metta, Giorgio, Maggiali, Marco, Pucci, Daniele

arXiv.org Artificial Intelligence

We present an avatar system designed to enable the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia (IIT). The contribution of the paper is twofold: first, we present the humanoid iCub3 as a robotic avatar that integrates the most significant improvements from about fifteen years of development of the iCub series; second, we present a versatile avatar system enabling humans to embody humanoid robots, encompassing locomotion, manipulation, voice, and facial expressions, with comprehensive sensory feedback including visual, auditory, haptic, weight, and touch modalities. We validate the system by implementing several avatar architecture instances, each tailored to specific requirements. First, we evaluated the architecture optimized for verbal, non-verbal, and physical interaction with a remote recipient. In this test, the operator was in Genoa and the avatar at the Biennale di Venezia in Venice, about 290 km away, allowing the operator to visit the Italian art exhibition remotely. Second, we evaluated the architecture optimized for physical collaboration with a recipient and for public engagement on stage, live, at the We Make Future show, a prominent world digital innovation festival. In this instance, the operator was situated in Genoa while the avatar operated in Rimini, about 300 km away, interacting with a recipient who entrusted the avatar with a payload to carry on stage before an audience of approximately 2000 spectators. Third, we present the architecture implemented by the iCub Team for the ANA Avatar XPrize competition.


Analysis and Perspectives on the ANA Avatar XPRIZE Competition

Hauser, Kris, Watson, Eleanor, Bae, Joonbum, Bankston, Josh, Behnke, Sven, Borgia, Bill, Catalano, Manuel G., Dafarra, Stefano, van Erp, Jan B. F., Ferris, Thomas, Fishel, Jeremy, Hoffman, Guy, Ivaldi, Serena, Kanehiro, Fumio, Kheddar, Abderrahmane, Lannuzel, Gaelle, Morie, Jacqueline Ford, Naughton, Patrick, NGuyen, Steve, Oh, Paul, Padir, Taskin, Pippine, Jim, Park, Jaeheung, Pucci, Daniele, Vaz, Jean, Whitney, Peter, Wu, Peggy, Locke, David

arXiv.org Artificial Intelligence

The ANA Avatar XPRIZE was a four-year competition to develop a robotic "avatar" system to allow a human operator to sense, communicate, and act in a remote environment as though physically present. The competition featured a unique requirement that judges would operate the avatars after less than one hour of training on the human-machine interfaces, and avatar systems were judged on both objective and subjective scoring metrics. This paper presents a unified summary and analysis of the competition from technical, judging, and organizational perspectives. We study the use of telerobotics technologies and innovations pursued by the competing teams in their avatar systems, and correlate the use of these technologies with judges' task performance and subjective survey ratings. We also summarize perspectives from team leads, judges, and organizers about the competition's execution and impact, to inform the future development of telerobotics and telepresence.


Amplifying robotics capacities with a human touch: An immersive low-latency panoramic remote system

Li, Junjie, Li, Kang, Han, Dewei, Xu, Jian, Ma, Zhaoyuan

arXiv.org Artificial Intelligence

AI and robotics technologies have witnessed remarkable advancements in the past decade, revolutionizing work patterns and opportunities in various domains. The application of these technologies has propelled society towards an era of symbiosis between humans and machines. To facilitate efficient communication between humans and intelligent robots, we propose the "Avatar" system, an immersive low-latency panoramic human-robot interaction platform. We have designed and tested a prototype of a rugged mobile platform integrated with edge computing units, panoramic video capture devices, power batteries, robot arms, and network communication equipment. Under favorable network conditions, we achieved a low-latency high-definition panoramic visual experience with a delay of 357ms. Operators can utilize VR headsets and controllers for real-time immersive control of robots and devices. The system enables remote control over vast physical distances, spanning campuses, provinces, countries, and even continents (New York to Shenzhen). Additionally, the system incorporates visual SLAM technology for map and trajectory recording, providing autonomous navigation capabilities. We believe that this intuitive system platform can enhance efficiency and situational experience in human-robot collaboration, and with further advancements in related technologies, it will become a versatile tool for efficient and symbiotic cooperation between AI and humans.
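
To make a figure like the reported 357 ms concrete, a glass-to-glass latency budget can be sketched by summing per-stage delays. The stage names and durations below are illustrative assumptions for a panoramic pipeline of this kind, not measurements from the paper:

```python
# Hypothetical end-to-end latency budget for a panoramic telepresence
# pipeline. All stage durations are illustrative assumptions.
STAGES_MS = {
    "camera_capture": 33,     # one frame at ~30 fps
    "stitch_and_encode": 80,  # panoramic stitching + video encoding
    "network_transit": 120,   # WAN share under favorable conditions
    "decode": 40,
    "vr_render": 11,          # one frame at 90 Hz
}

def total_latency_ms(stages):
    """Sum per-stage delays into a glass-to-glass estimate."""
    return sum(stages.values())

def dominant_stage(stages):
    """Return the stage contributing the most delay."""
    return max(stages, key=stages.get)
```

Budgets like this make it clear which stage to attack first: here the (assumed) network transit dominates, which matches the paper's emphasis on favorable network conditions.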


Robust Immersive Telepresence and Mobile Telemanipulation: NimbRo wins ANA Avatar XPRIZE Finals

Schwarz, Max, Lenz, Christian, Memmesheimer, Raphael, Pätzold, Bastian, Rochow, Andre, Schreiber, Michael, Behnke, Sven

arXiv.org Artificial Intelligence

Abstract -- Robotic avatar systems promise to bridge distances and reduce the need for travel. We present the updated NimbRo avatar system, winner of the $5M grand prize at the international ANA Avatar XPRIZE competition, which required participants to build intuitive and immersive robotic telepresence systems that could be operated by briefly trained operators. Video and audio data are compressed using low-latency HEVC and Opus codecs. We propose a new locomotion control device with tunable resistance force. To increase flexibility, the robot's upper-body height can be adjusted by the operator. Reducing the need to travel is thus beneficial for many reasons. While voice calls and video conferencing help, they cannot replace in-person meetings entirely due to a lack of immersion and social interaction. In this paper, we present and discuss the updates and extensions of the NimbRo avatar system (Figure 1) that we made for our highly successful participation in the ANA Avatar XPRIZE Finals in November 2022, where our team won the grand prize. (Figure 1 caption: Top left: operator judge controlling the avatar. Bottom left: VR view, cropped.)
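
On the codec choice: the audio-path latency of an Opus-style low-latency stream is dominated by frame duration, encoder lookahead, and receive-side jitter buffering. A minimal sketch follows; Opus's supported frame durations and its 6.5 ms default lookahead are real, while the jitter-buffer depth is an assumption, and none of the numbers are measurements from the NimbRo system:

```python
# Rough algorithmic-delay calculator for an Opus-style audio stream:
# delay = frame duration + encoder lookahead + jitter buffer.
OPUS_FRAME_SIZES_MS = (2.5, 5, 10, 20, 40, 60)  # frame durations Opus supports

def audio_path_delay_ms(frame_ms, lookahead_ms=6.5, jitter_buffer_frames=2):
    """One frame to fill, encoder lookahead, plus a small
    jitter buffer on the receiving side (depth is an assumption)."""
    if frame_ms not in OPUS_FRAME_SIZES_MS:
        raise ValueError(f"unsupported frame size: {frame_ms} ms")
    return frame_ms + lookahead_ms + jitter_buffer_frames * frame_ms

# Shorter frames trade bitrate efficiency for lower latency:
low = audio_path_delay_ms(5)    # 5 + 6.5 + 2*5 = 21.5 ms
high = audio_path_delay_ms(60)  # 60 + 6.5 + 2*60 = 186.5 ms
```

This is why low-latency telepresence systems favor short frames despite the per-packet overhead.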


NimbRo wins ANA Avatar XPRIZE Immersive Telepresence Competition: Human-Centric Evaluation and Lessons Learned

Lenz, Christian, Schwarz, Max, Rochow, Andre, Pätzold, Bastian, Memmesheimer, Raphael, Schreiber, Michael, Behnke, Sven

arXiv.org Artificial Intelligence

Robotic avatar systems can enable immersive telepresence with locomotion, manipulation, and communication capabilities. We present such an avatar system, based on the key components of immersive 3D visualization and transparent force-feedback telemanipulation. Our avatar robot features an anthropomorphic upper body with dexterous hands. The remote human operator drives the arms and fingers through an exoskeleton-based operator station, which provides force feedback both at the wrist and for each finger. The robot torso is mounted on a holonomic base, providing omnidirectional locomotion on flat floors, controlled using a 3D rudder device. Finally, the robot features a 6D movable head with stereo cameras, which stream images to a VR display worn by the operator. Movement latency is hidden using spherical rendering. The head also carries a telepresence screen displaying an animated image of the operator's face, enabling direct interaction with remote persons. Our system won the $10M ANA Avatar XPRIZE competition, which challenged teams to develop intuitive and immersive avatar systems that could be operated by briefly trained judges. We analyze our successful participation in the semifinals and finals and provide insight into our operator training and lessons learned. In addition, we evaluate our system in a user study that demonstrates its intuitive and easy usability.
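
The spherical-rendering idea for hiding movement latency can be sketched as a reprojection: the last received image is treated as a texture on a sphere around the operator's head and sampled with the operator's *current* orientation rather than the capture-time one. The toy version below is yaw-only and assumes an equirectangular panorama; a real system such as the one described would use full 3D rotations:

```python
import math

# Latency hiding via spherical rendering, simplified to yaw only.
# The panorama captured at yaw_capture is resampled for the head's
# current yaw, so small head turns feel instantaneous even while the
# next camera frame is still in flight.
def view_to_equirect_uv(yaw_capture, yaw_now, pitch=0.0):
    """Map the current gaze direction into (u, v) texture coordinates
    of the panorama captured at yaw_capture. Angles in radians."""
    # Rotate the gaze back into the captured panorama's frame,
    # wrapping the difference into (-pi, pi].
    rel_yaw = (yaw_now - yaw_capture + math.pi) % (2 * math.pi) - math.pi
    u = (rel_yaw + math.pi) / (2 * math.pi)  # 0..1 across longitude
    v = (pitch + math.pi / 2) / math.pi      # 0..1 across latitude
    return u, v
```

Looking straight ahead with no head motion samples the panorama center; a quarter-turn to the left or right shifts the sampled longitude accordingly.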


The $10 Million ANA Avatar XPRIZE Competition Advanced Immersive Telepresence Systems

Behnke, Sven, Adams, Julie A., Locke, David

arXiv.org Artificial Intelligence

The $10M ANA Avatar XPRIZE aimed to create avatar systems that can transport human presence to remote locations in real time. The participants of this multi-year competition developed robotic systems that allow operators to see, hear, and interact with a remote environment in a way that feels as if they are truly there. In turn, people in the remote environment were given the impression that the operator was present inside the avatar robot. At the competition finals, held in November 2022 in Long Beach, CA, USA, the avatar systems were evaluated on their support for remotely interacting with humans, exploring new environments, and employing specialized skills. This article describes the competition stages with tasks and evaluation procedures, reports the results, presents the winning teams' approaches, and discusses lessons learned.


Team Northeastern's Approach to ANA XPRIZE Avatar Final Testing: A Holistic Approach to Telepresence and Lessons Learned

Luo, Rui, Wang, Chunpeng, Keil, Colin, Nguyen, David, Mayne, Henry, Alt, Stephen, Schwarm, Eric, Mendoza, Evelyn, Padır, Taşkın, Whitney, John Peter

arXiv.org Artificial Intelligence

This paper reports on Team Northeastern's Avatar system for telepresence and our holistic approach to meeting the ANA Avatar XPRIZE Final testing task requirements. The system features a dual-arm configuration with a hydraulically actuated glove-gripper pair for haptic force feedback. Our proposed Avatar system was evaluated in the ANA Avatar XPRIZE Finals, completed all 10 tasks, scored 14.5 points out of 15.0, and received the 3rd Place Award. We provide details of the improvements over our first-generation Avatar, covering manipulation, perception, locomotion, power, network, and controller design. We also extensively discuss the major lessons learned during our participation in the competition.
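
The glove-gripper force feedback rests on a simple hydraulic principle: cylinders sharing a sealed fluid line share pressure, so the force reflected to the operator scales with the ratio of piston areas. A minimal sketch with hypothetical piston areas (not values from the paper):

```python
# Hydraulic force reflection between a gripper cylinder and a glove
# cylinder on a shared sealed fluid line. Piston areas are illustrative
# assumptions, not Team Northeastern's actual dimensions.
def reflected_force_n(contact_force_n, gripper_piston_area_cm2=2.0,
                      glove_piston_area_cm2=0.5):
    """In a sealed line the pressure is common to both cylinders, so
    force scales with the piston-area ratio (friction and line losses
    ignored)."""
    pressure = contact_force_n / gripper_piston_area_cm2  # N/cm^2
    return pressure * glove_piston_area_cm2

# With these areas, a 10 N grasp at the gripper is felt as 2.5 N
# at the operator's finger.
```

Choosing the area ratio thus sets the haptic "gain" between the remote contact force and what the operator feels.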


Artificial intelligence and the classroom of the future

#artificialintelligence

Imagine a classroom in the future where teachers are working alongside artificial intelligence partners to ensure no student gets left behind. The AI partner's careful monitoring picks up on a student in the back who has been quiet and still for the whole class, and the AI partner prompts the teacher to engage the student. When called on, the student asks a question. The teacher clarifies the material that has been presented, and every student comes away with a better understanding of the lesson. This is part of a larger vision of future classrooms where human instruction and AI technology interact to improve educational environments and the learning experience. James Pustejovsky, the TJX Feldberg Professor of Computer Science, is working towards that vision with a team led by the University of Colorado Boulder, as part of the new $20 million National Science Foundation-funded AI Institute for Student-AI Teaming. The research will play a critical role in helping ensure the AI agent is a natural partner in the classroom, with language and vision capabilities, allowing it not only to hear what the teacher and each student are saying, but also to notice gestures (pointing, shrugs, shaking a head), eye gaze, and facial expressions (student attitudes and emotions).


ObEN nabs $7.7M Series A as it looks to build a more human VR avatar

#artificialintelligence

When you're in virtual reality and start associating your limb and head movements with your onscreen avatar, that digital recreation really becomes an extension of who you are. ObEN is a startup launching out of HTC's new Vive X accelerator that is hoping to craft a more complete digital version of its users so that they can be drawn deeper into immersive VR experiences. The startup uses AI to recreate a user's face photo-realistically in 3D based on nothing more than a selfie, while also capturing the tone and intonation of a user's voice based on just a short voice recording. Today, the company is announcing $7.7 million in Series A funding led by CrestValue Capital and other Chinese investing partners. ObEN is looking to use the funds to build its team and scale its product a bit. A big problem with avatars in VR, and in video games more broadly, is the "uncanny valley," a term for when a digital human avatar ends up appearing unsettling because it's realistic, but just not quite familiar enough.