Elobaid, Mohamed
XBG: End-to-end Imitation Learning for Autonomous Behaviour in Human-Robot Interaction and Collaboration
Cardenas-Perez, Carlos, Romualdi, Giulio, Elobaid, Mohamed, Dafarra, Stefano, L'Erario, Giuseppe, Traversaro, Silvio, Morerio, Pietro, Del Bue, Alessio, Pucci, Daniele
This paper presents XBG (eXteroceptive Behaviour Generation), a multimodal end-to-end Imitation Learning (IL) system for a whole-body autonomous humanoid robot used in real-world Human-Robot Interaction (HRI) scenarios. The main contribution of this paper is an architecture for learning HRI behaviours using a data-driven approach. Through teleoperation, a diverse dataset is collected, comprising demonstrations across multiple HRI scenarios, including handshaking, handwaving, payload reception, walking, and walking with a payload. After synchronizing, filtering, and transforming the data, different Deep Neural Network (DNN) models are trained. The final system integrates different modalities, comprising exteroceptive and proprioceptive sources of information, to provide the robot with an understanding of its environment and its own actions. The robot takes sequences of images (RGB and depth) and joint-state information during the interactions and reacts accordingly, demonstrating learned behaviours. By fusing multimodal signals over time, we encode new autonomous capabilities into the robotic platform, allowing the understanding of context changes over time. The models are deployed on ergoCub, a real-world humanoid robot, and their performance is measured by calculating the success rate of the robot's behaviour under the mentioned scenarios.
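The abstract above describes fusing RGB, depth, and joint-state signals over a time window before selecting a behaviour. A minimal sketch of that fusion idea is below; it is purely illustrative, not the XBG network: the feature dimensions, the prototype-matching classifier, and the behaviour names are all invented for the example.

```python
# Illustrative sketch of multimodal temporal fusion (NOT the XBG model):
# per-timestep features from each modality are concatenated, averaged over
# the time window, and matched against hypothetical behaviour prototypes.

def fuse_window(rgb_feats, depth_feats, joint_feats):
    """Concatenate per-timestep modality features, then average over time."""
    fused = [r + d + j for r, d, j in zip(rgb_feats, depth_feats, joint_feats)]
    n, dim = len(fused), len(fused[0])
    return [sum(step[k] for step in fused) / n for k in range(dim)]

def classify(fused, prototypes):
    """Pick the behaviour whose prototype vector best matches (dot product)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(prototypes, key=lambda name: dot(fused, prototypes[name]))

# Toy example: 3 timesteps, 2-dim features per modality (6-dim fused vector).
rgb    = [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]
depth  = [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]
joints = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]
fused = fuse_window(rgb, depth, joints)
protos = {"handwave": [1, 0, 0, 1, 0, 0], "walk": [0, 0, 0, 0, 1, 1]}
behaviour = classify(fused, protos)  # -> "handwave"
```

In the real system a learned encoder would replace the hand-written features, but the structure (per-modality encoding, temporal pooling, shared head) is the same.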
iCub3 Avatar System: Enabling Remote Fully-Immersive Embodiment of Humanoid Robots
Dafarra, Stefano, Pattacini, Ugo, Romualdi, Giulio, Rapetti, Lorenzo, Grieco, Riccardo, Darvish, Kourosh, Milani, Gianluca, Valli, Enrico, Sorrentino, Ines, Viceconte, Paolo Maria, Scalzo, Alessandro, Traversaro, Silvio, Sartore, Carlotta, Elobaid, Mohamed, Guedelha, Nuno, Herron, Connor, Leonessa, Alexander, Draicchio, Francesco, Metta, Giorgio, Maggiali, Marco, Pucci, Daniele
We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia (IIT). More precisely, the contribution of the paper is twofold: first, we present the humanoid iCub3 as a robotic avatar that integrates the latest significant improvements after about fifteen years of development of the iCub series; second, we present a versatile avatar system enabling humans to embody humanoid robots, encompassing aspects such as locomotion, manipulation, voice, and facial expressions, with comprehensive sensory feedback including visual, auditory, haptic, weight, and touch modalities. We validate the system by implementing several avatar architecture instances, each tailored to specific requirements. First, we evaluated the architecture optimized for verbal, non-verbal, and physical interactions with a remote recipient. This testing involved the operator in Genoa and the avatar at the Biennale di Venezia, Venice - about 290 km away - thus allowing the operator to remotely visit the Italian art exhibition. Second, we evaluated the architecture optimized for physical collaboration with a recipient and public engagement on stage, live, at the We Make Future show, a prominent world digital innovation festival. In this instance, the operator was situated in Genoa while the avatar operated in Rimini - about 300 km away - interacting with a recipient who entrusted the avatar with a payload to carry on stage before an audience of approximately 2000 spectators. Third, we present the architecture implemented by the iCub Team for the ANA Avatar XPrize competition.
Online Non-linear Centroidal MPC for Humanoid Robots Payload Carrying with Contact-Stable Force Parametrization
Elobaid, Mohamed, Romualdi, Giulio, Nava, Gabriele, Rapetti, Lorenzo, Mohamed, Hosameldin Awadalla Omer, Pucci, Daniele
In this paper we consider the problem of allowing a humanoid robot that is subject to a persistent disturbance, in the form of a payload-carrying task, to follow given planned footsteps. The MPC is augmented with terms handling the disturbance and regularizing the force parametrization. Finally, the effect of using the parametrization on the computational time of the controller is briefly studied. The high-level control layer typically utilizes "template" models to reason about the center of mass and feet trajectories [2], while the whole-body control layer uses the full robot model to track the adapted trajectories (see Figure 1). This paper focuses on designing a high-level trajectory adjustment controller leveraging a template model to allow for humanoid robot locomotion. Figure 1: The controller highlighted in a typical multi-layer bipedal locomotion control architecture.
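The core idea of augmenting the high-level controller with a disturbance-handling term can be illustrated with a deliberately simplified 1D sketch. This is not the paper's non-linear centroidal MPC: it replaces the MPC with a plain PD law and collapses the centroidal dynamics to a point mass, and all masses, gains, and forces are arbitrary.

```python
# Simplified 1D sketch (NOT the paper's centroidal MPC): a high-level
# controller regulates the CoM position while an estimated payload force
# is compensated explicitly, mirroring the idea of augmenting the
# controller with disturbance-handling terms.

def simulate(payload_force, payload_estimate, steps=2000, dt=0.005):
    m, kp, kd = 30.0, 50.0, 15.0   # point-mass "robot" [kg] and PD gains
    x, v, x_ref = 0.0, 0.0, 0.1    # CoM position/velocity, reference [m]
    for _ in range(steps):
        # disturbance-aware control: PD tracking minus estimated payload force
        f = m * (kp * (x_ref - x) - kd * v) - payload_estimate
        a = (f + payload_force) / m  # true payload acts on the dynamics
        v += a * dt
        x += v * dt
    return x

# With a correct payload estimate the CoM reaches the reference; without
# compensation a steady-state offset remains.
x_comp  = simulate(payload_force=-60.0, payload_estimate=-60.0)
x_naive = simulate(payload_force=-60.0, payload_estimate=0.0)
```

The uncompensated run settles short of the 0.1 m reference by roughly `60 / (m * kp) = 0.04` m, which is the kind of persistent error the disturbance-augmented formulation is meant to remove.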
A Control Approach for Human-Robot Ergonomic Payload Lifting
Rapetti, Lorenzo, Sartore, Carlotta, Elobaid, Mohamed, Tirupachuri, Yeshasvi, Draicchio, Francesco, Kawakami, Tomohiro, Yoshiike, Takahide, Pucci, Daniele
Collaborative robots can relieve human operators from excessive efforts during payload lifting activities. Modelling the human partner allows the design of safe and efficient collaborative strategies. In this paper, we present a control approach for human-robot collaboration based on human monitoring through whole-body wearable sensors, and interaction modelling through coupled rigid-body dynamics. Moreover, a trajectory advancement strategy is proposed, allowing for online adaptation of the robot trajectory depending on the human motion. The resulting framework allows us to perform payload lifting tasks while taking into account the ergonomic requirements of the agents. Validation has been performed in an experimental scenario using the iCub3 humanoid robot and a human subject sensorized with the iFeel wearable system.
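The trajectory advancement idea (adapting the robot's reference online to the human's motion) can be sketched as a phase variable that only moves forward as the monitored human progresses. This is a hedged illustration under invented assumptions, not the paper's controller: the human progress signal, the rate limit, and the lifting trajectory below are all hypothetical.

```python
# Illustrative sketch of trajectory advancement (NOT the paper's method):
# the robot's lifting reference is parametrized by a phase s in [0, 1],
# advanced monotonically toward the human partner's observed progress.

def advance_phase(s, human_progress, gain=1.0, max_step=0.05):
    """Advance phase toward the human's progress, monotonically and bounded."""
    step = gain * (human_progress - s)
    step = max(0.0, min(step, max_step))  # never move backwards, rate-limited
    return min(s + step, 1.0)

def robot_height(s, start=0.2, end=0.8):
    """Payload height [m] along the lifting trajectory at phase s."""
    return start + s * (end - start)

s = 0.0
human = [0.0, 0.1, 0.3, 0.3, 0.6, 0.9, 1.0, 1.0]  # mock human progress signal
phases = []
for h in human:
    s = advance_phase(s, h)
    phases.append(s)
heights = [robot_height(p) for p in phases]
```

Because the phase can stall but never regress, the robot waits when the human pauses and catches up (at a bounded rate) when the human moves, which is the behaviour the online-adaptation strategy is after.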