Dafarra, Stefano
Online DNN-driven Nonlinear MPC for Stylistic Humanoid Robot Walking with Step Adjustment
Romualdi, Giulio, Viceconte, Paolo Maria, Moretti, Lorenzo, Sorrentino, Ines, Dafarra, Stefano, Traversaro, Silvio, Pucci, Daniele
This paper presents a three-layered architecture that enables stylistic locomotion with online contact location adjustment. Our method combines an autoregressive Deep Neural Network (DNN) acting as a trajectory generation layer with model-based trajectory adjustment and trajectory control layers. The DNN produces centroidal and postural references that serve as an initial guess and regularizer for the other layers. Since the DNN is trained on human motion capture data, the resulting robot motion exhibits locomotion patterns resembling a human walking style. The trajectory adjustment layer uses nonlinear optimization to ensure dynamically feasible center of mass (CoM) motion while handling step adjustments. We compare two implementations of the trajectory adjustment layer: one as a receding horizon planner (RHP) and the other as a model predictive controller (MPC). To enhance MPC performance, we introduce a Kalman filter to reduce measurement noise; the filter parameters are automatically tuned with a Genetic Algorithm. Experimental results on the ergoCub humanoid robot demonstrate the system's ability to prevent falls, replicate human walking styles, and withstand disturbances of up to 68 N. Website: https://sites.google.com/view/dnn-mpc-walking YouTube video: https://www.youtube.com/watch?v=x3tzEfxO-xQ
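For illustration, the trajectory adjustment layer can be pictured as a receding-horizon program over a simplified model. The sketch below uses CasADi's Opti stack with linear-inverted-pendulum dynamics; the horizon length, weights, and names such as dnn_com_ref are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal receding-horizon sketch of a trajectory adjustment layer, using
# CasADi's Opti stack. The linear-inverted-pendulum (LIP) model, horizon,
# weights, and the dnn_* reference parameters are illustrative assumptions.
import casadi as ca

N, dt = 20, 0.1          # horizon steps and sampling time (assumed)
omega2 = 9.81 / 0.7      # LIP constant g / z_com for a 0.7 m CoM height

opti = ca.Opti()
com = opti.variable(2, N + 1)   # CoM position (x, y) over the horizon
vel = opti.variable(2, N + 1)   # CoM velocity
zmp = opti.variable(2, N)       # zero-moment point (control input)
foot = opti.variable(2)         # adjusted next contact location

dnn_com_ref = opti.parameter(2, N + 1)  # DNN centroidal reference
foot_nominal = opti.parameter(2)        # nominal footstep position
x0 = opti.parameter(4)                  # measured (Kalman-filtered) state

# LIP dynamics: com_ddot = omega^2 * (com - zmp), Euler-integrated.
for k in range(N):
    acc = omega2 * (com[:, k] - zmp[:, k])
    opti.subject_to(com[:, k + 1] == com[:, k] + dt * vel[:, k])
    opti.subject_to(vel[:, k + 1] == vel[:, k] + dt * acc)
    # Keep the ZMP inside a box around the (adjustable) contact.
    opti.subject_to(opti.bounded(-0.05, zmp[:, k] - foot, 0.05))

opti.subject_to(com[:, 0] == x0[:2])
opti.subject_to(vel[:, 0] == x0[2:])

# Track the DNN reference; penalize deviating from the nominal footstep.
opti.minimize(ca.sumsqr(com - dnn_com_ref)
              + 10.0 * ca.sumsqr(foot - foot_nominal))
opti.solver("ipopt")
# In a control loop one would call opti.set_value(...) for the parameters
# and opti.solve() at every cycle, applying only the first control action.
```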
XBG: End-to-end Imitation Learning for Autonomous Behaviour in Human-Robot Interaction and Collaboration
Cardenas-Perez, Carlos, Romualdi, Giulio, Elobaid, Mohamed, Dafarra, Stefano, L'Erario, Giuseppe, Traversaro, Silvio, Morerio, Pietro, Del Bue, Alessio, Pucci, Daniele
This paper presents XBG (eXteroceptive Behaviour Generation), a multimodal end-to-end Imitation Learning (IL) system for a whole-body autonomous humanoid robot used in real-world Human-Robot Interaction (HRI) scenarios. The main contribution of this paper is an architecture for learning HRI behaviours using a data-driven approach. Through teleoperation, a diverse dataset is collected, comprising demonstrations across multiple HRI scenarios, including handshaking, handwaving, payload reception, walking, and walking with a payload. After synchronizing, filtering, and transforming the data, different Deep Neural Network (DNN) models are trained. The final system integrates modalities comprising exteroceptive and proprioceptive sources of information, providing the robot with an understanding of its environment and its own actions. The robot takes sequences of images (RGB and depth) and joint-state information during the interactions and reacts accordingly, demonstrating the learned behaviours. By fusing multimodal signals in time, we encode new autonomous capabilities into the robotic platform, enabling it to understand how context changes over time. The models are deployed on ergoCub, a real-world humanoid robot, and their performance is measured as the success rate of the robot's behaviour in the aforementioned scenarios.
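As a rough illustration of this kind of architecture, the following PyTorch sketch fuses RGB, depth, and joint-state sequences through per-modality encoders and a recurrent layer. Layer sizes, the GRU-based temporal fusion, and all names are assumptions for illustration; the actual XBG architecture may differ.

```python
# Illustrative sketch of a multimodal behaviour-generation network: per-frame
# visual and proprioceptive encoders fused over time by a recurrent layer.
# Sizes and the fusion scheme are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class MultimodalBehaviourNet(nn.Module):
    def __init__(self, n_joints=32, n_actions=32, hidden=256):
        super().__init__()
        # Small CNN encoders for RGB (3-channel) and depth (1-channel) frames.
        def conv_encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 16, 128), nn.ReLU(),
            )
        self.rgb_enc = conv_encoder(3)
        self.depth_enc = conv_encoder(1)
        self.joint_enc = nn.Sequential(nn.Linear(n_joints, 64), nn.ReLU())
        # Temporal fusion: GRU over the concatenated per-frame features.
        self.rnn = nn.GRU(128 + 128 + 64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, rgb, depth, joints):
        # rgb: (B, T, 3, H, W), depth: (B, T, 1, H, W), joints: (B, T, n)
        B, T = rgb.shape[:2]
        f_rgb = self.rgb_enc(rgb.flatten(0, 1)).view(B, T, -1)
        f_depth = self.depth_enc(depth.flatten(0, 1)).view(B, T, -1)
        f_joint = self.joint_enc(joints)
        feats, _ = self.rnn(torch.cat([f_rgb, f_depth, f_joint], dim=-1))
        return self.head(feats[:, -1])  # action for the latest time step
```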
iCub3 Avatar System: Enabling Remote Fully-Immersive Embodiment of Humanoid Robots
Dafarra, Stefano, Pattacini, Ugo, Romualdi, Giulio, Rapetti, Lorenzo, Grieco, Riccardo, Darvish, Kourosh, Milani, Gianluca, Valli, Enrico, Sorrentino, Ines, Viceconte, Paolo Maria, Scalzo, Alessandro, Traversaro, Silvio, Sartore, Carlotta, Elobaid, Mohamed, Guedelha, Nuno, Herron, Connor, Leonessa, Alexander, Draicchio, Francesco, Metta, Giorgio, Maggiali, Marco, Pucci, Daniele
We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia (IIT). More precisely, the contribution of the paper is twofold: first, we present the humanoid iCub3 as a robotic avatar which integrates the latest significant improvements after about fifteen years of development of the iCub series; second, we present a versatile avatar system enabling humans to embody humanoid robots, encompassing aspects such as locomotion, manipulation, voice, and facial expressions, with comprehensive sensory feedback including visual, auditory, haptic, weight, and touch modalities. We validate the system by implementing several avatar architecture instances, each tailored to specific requirements. First, we evaluated the architecture optimized for verbal, non-verbal, and physical interactions with a remote recipient. This testing involved the operator in Genoa and the avatar at the Biennale di Venezia, Venice - about 290 km away - thus allowing the operator to remotely visit the Italian art exhibition. Second, we evaluated the architecture optimized for physical collaboration with a recipient and public engagement on-stage, live, at the We Make Future show, a prominent world digital innovation festival. In this instance, the operator was situated in Genoa while the avatar operated in Rimini - about 300 km away - interacting with a recipient who entrusted the avatar with a payload to carry on stage before an audience of approximately 2000 spectators. Third, we present the architecture implemented by the iCub Team for the ANA Avatar XPRIZE competition.
Analysis and Perspectives on the ANA Avatar XPRIZE Competition
Hauser, Kris, Watson, Eleanor, Bae, Joonbum, Bankston, Josh, Behnke, Sven, Borgia, Bill, Catalano, Manuel G., Dafarra, Stefano, van Erp, Jan B. F., Ferris, Thomas, Fishel, Jeremy, Hoffman, Guy, Ivaldi, Serena, Kanehiro, Fumio, Kheddar, Abderrahmane, Lannuzel, Gaelle, Morie, Jacqueline Ford, Naughton, Patrick, NGuyen, Steve, Oh, Paul, Padir, Taskin, Pippine, Jim, Park, Jaeheung, Pucci, Daniele, Vaz, Jean, Whitney, Peter, Wu, Peggy, Locke, David
The ANA Avatar XPRIZE was a four-year competition to develop a robotic "avatar" system allowing a human operator to sense, communicate, and act in a remote environment as though physically present. The competition featured a unique requirement that judges would operate the avatars after less than one hour of training on the human-machine interfaces, and avatar systems were judged on both objective and subjective scoring metrics. This paper presents a unified summary and analysis of the competition from technical, judging, and organizational perspectives. We study the use of telerobotics technologies and innovations pursued by the competing teams in their avatar systems, and correlate the use of these technologies with judges' task performance and subjective survey ratings. The paper also summarizes perspectives from team leads, judges, and organizers on the competition's execution and impact, to inform the future development of telerobotics and telepresence.
Codesign of Humanoid Robots for Ergonomy Collaboration with Multiple Humans via Genetic Algorithms and Nonlinear Optimization
Sartore, Carlotta, Rapetti, Lorenzo, Bergonti, Fabio, Dafarra, Stefano, Traversaro, Silvio, Pucci, Daniele
Ergonomics is a key factor to consider when designing control architectures for effective physical collaboration between humans and humanoid robots. However, ergonomic indexes are often overlooked in the robot design phase, which leads to suboptimal performance in physical human-robot interaction tasks. This paper proposes a novel methodology for optimizing the design of humanoid robots with respect to ergonomic indicators associated with the interaction of multiple agents. Our approach leverages a dynamic and kinematic parameterization of the robot link and motor specifications to seek optimal robot designs via a bilevel optimization approach. Specifically, a genetic algorithm first generates robot designs by selecting the link and motor characteristics. Then, we use nonlinear optimization to evaluate interaction ergonomy indexes during collaborative payload lifting with different humans and weights. To assess the effectiveness of our approach, we compare the optimal design obtained using bilevel optimization against the design obtained using nonlinear optimization alone. Our results show that the proposed approach significantly improves ergonomics in terms of energy expenditure, calculated in two reference scenarios involving static and dynamic robot motions. We plan to apply our methodology to drive the design of the ergoCub2 robot, a humanoid intended for optimal physical collaboration with humans in diverse environments.
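The bilevel structure can be sketched schematically: an outer genetic algorithm proposes link and motor parameters, while an inner nonlinear program scores each candidate design. In the sketch below, the toy quadratic cost, bounds, and population settings are placeholders standing in for the paper's ergonomy indexes and whole-body models.

```python
# Schematic bilevel loop: an outer genetic algorithm proposes design
# parameters; an inner nonlinear program scores each design via an
# ergonomy-related cost. All models here are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def inner_ergonomy_cost(design):
    # Inner NLP: find the collaboration posture q minimizing an
    # energy-expenditure proxy for this design (toy quadratic here).
    def cost(q):
        return np.sum((q - design) ** 2) + 0.1 * np.sum(q ** 2)
    res = minimize(cost, x0=np.zeros_like(design), method="BFGS")
    return res.fun

def genetic_search(dim=4, pop_size=16, generations=30, sigma=0.1):
    pop = rng.uniform(0.5, 1.5, size=(pop_size, dim))  # e.g. link scalings
    for _ in range(generations):
        fitness = np.array([inner_ergonomy_cost(d) for d in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 4]]   # selection
        kids = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        kids = kids + rng.normal(0, sigma, size=kids.shape)  # mutation
        pop = np.vstack([elite, np.clip(kids, 0.5, 1.5)])
    fitness = np.array([inner_ergonomy_cost(d) for d in pop])
    return pop[np.argmin(fitness)]

best_design = genetic_search()
```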
Whole-Body Trajectory Optimization for Robot Multimodal Locomotion
L'Erario, Giuseppe, Nava, Gabriele, Romualdi, Giulio, Bergonti, Fabio, Razza, Valentino, Dafarra, Stefano, Pucci, Daniele
The general problem of planning feasible trajectories for multimodal robots is still an open challenge. This paper presents a whole-body trajectory optimisation approach that addresses this challenge by combining methods and tools developed for aerial and legged robots. First, we introduce the robot models that underpin the proposed whole-body trajectory optimisation framework. The key model is the so-called robot centroidal momentum, whose dynamics is directly related to the models of the robot actuation for aerial and terrestrial locomotion. Then, the paper shows how these models can be employed in an optimal control problem to generate either terrestrial or aerial locomotion trajectories with a unified approach. The optimisation problem considers robot kinematics, momentum, thrust forces, and their bounds. The overall approach is validated on the multimodal robot iRonCub, a flying humanoid robot capable of a degree of both terrestrial and aerial locomotion. To solve the associated optimal trajectory generation problem, we employ ADAM, a custom-made open-source library that implements a collection of algorithms for computing rigid-body dynamics using CasADi.
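To make the centroidal-momentum building block concrete, the snippet below writes its dynamics with CasADi symbols: the momentum rate equals gravity plus the contributions of contact and thrust forces, each producing a moment about the CoM. The mass, force counts, and symbol names are assumptions for illustration; the paper assembles the full optimal control problem through the ADAM library rather than by hand as done here.

```python
# Hedged sketch of the centroidal-momentum dynamics described above, written
# with CasADi symbols. Contact/thrust counts and the mass are illustrative.
import casadi as ca

m, g = 40.0, ca.DM([0, 0, -9.81])   # robot mass (assumed) and gravity
n_contacts, n_jets = 2, 4           # feet contacts and jet thrusts (assumed)

com = ca.SX.sym("com", 3)           # center of mass position
f_c = [ca.SX.sym(f"f_c{i}", 3) for i in range(n_contacts)]  # contact forces
p_c = [ca.SX.sym(f"p_c{i}", 3) for i in range(n_contacts)]  # contact points
T_j = [ca.SX.sym(f"T{i}", 3) for i in range(n_jets)]        # thrust forces
p_j = [ca.SX.sym(f"p_j{i}", 3) for i in range(n_jets)]      # thrust origins

# Centroidal dynamics: gravity plus external wrenches from contacts and jets.
lin = m * g
ang = ca.SX.zeros(3)
for f, p in list(zip(f_c, p_c)) + list(zip(T_j, p_j)):
    lin = lin + f                       # net force on the CoM
    ang = ang + ca.cross(p - com, f)    # moment about the CoM
h_dot = ca.vertcat(lin, ang)
# h_dot would enter the optimal control problem as an equality constraint on
# the integrated momentum trajectory, alongside kinematic and force bounds.
```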