Caldwell, Darwin
Human-Like Robot Impedance Regulation Skill Learning from Human-Human Demonstrations
Li, Chenzui, Wu, Xi, Liu, Junjia, Teng, Tao, Chen, Yiming, Calinon, Sylvain, Caldwell, Darwin, Chen, Fei
Humans are experts at collaborating physically with others, regulating compliance behaviors based on the perception of their partners' states and the task requirements. Enabling robots to develop proficiency in human collaboration skills can facilitate more efficient human-robot collaboration (HRC). This paper introduces an impedance regulation skill learning framework for achieving HRC in multiple physical collaborative tasks. The framework is designed to adjust the robot's compliance to the human partner's states while adhering to reference trajectories provided by human-human demonstrations. Specifically, electromyography (EMG) signals from human muscles are collected and analyzed to extract limb impedance, representing compliance behaviors during demonstrations. Human endpoint motions are captured and represented using a probabilistic learning method to create reference trajectories and corresponding impedance profiles. Meanwhile, an LSTM-based module develops task-oriented impedance regulation policies by mapping the muscle synergistic contributions between the two demonstrators. Finally, we propose a whole-body impedance controller for a human-like robot, coordinating joint outputs to achieve the desired impedance and reference trajectory during task execution. Experimental validation was conducted through a collaborative transportation task and two interactive Tai Chi pushing-hands tasks, demonstrating superior performance in terms of interactive forces compared with a constant impedance control method.
Collaborative robots (cobots) have emerged as a solution for more efficient human-robot collaboration (HRC) in both industrial and domestic scenarios. Co-manipulation outperforms fully robotic manipulation by offering enhanced flexibility and effectiveness, while surpassing fully human manipulation by reducing labor costs, maintaining concentration, and minimizing errors due to fatigue [1].
This work was supported in part by the Research Grants Council of the Hong Kong SAR under Grants 24209021, 14222722, 14211723, and C7100-22GF, and in part by InnoHK of the Government of Hong Kong via the Hong Kong Centre for Logistics Robotics. Darwin Caldwell is with the Department of Advanced Robotics, Istituto Italiano di Tecnologia, 16163 Genoa, Italy (e-mail: darwin.caldwell@iit.it).
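At the heart of such a framework sits a Cartesian impedance law whose stiffness is modulated online from the human's muscle state. The following is a minimal sketch under stated assumptions: the diagonal stiffness, the linear activation-to-stiffness map, the gain range, and the critical-damping choice are illustrative simplifications, not the paper's EMG analysis or its LSTM regulation policy.

```python
import numpy as np

def impedance_force(x, x_dot, x_ref, x_ref_dot, K):
    """Cartesian impedance law: F = K (x_ref - x) + D (x_ref_dot - x_dot),
    with per-axis critical damping D = 2*sqrt(K) (a hypothetical choice)."""
    D = 2.0 * np.sqrt(K)
    return K * (x_ref - x) + D * (x_ref_dot - x_dot)

def stiffness_from_activation(a, k_min=50.0, k_max=800.0):
    """Map a normalized muscle-activation signal a in [0, 1] to a stiffness
    in N/m -- a linear stand-in for an EMG-derived impedance profile."""
    return k_min + (k_max - k_min) * a

K = stiffness_from_activation(0.5)  # mid co-contraction -> 425 N/m
F = impedance_force(np.zeros(3), np.zeros(3),
                    np.array([0.01, 0.0, 0.0]), np.zeros(3), K)
```

Raising the activation stiffens the robot around the reference trajectory; lowering it makes the robot yield to the partner, which is the compliance trade-off the paper's learned policy regulates.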
Human-Humanoid Robots Cross-Embodiment Behavior-Skill Transfer Using Decomposed Adversarial Learning from Demonstration
Liu, Junjia, Li, Zhuo, Yu, Minghao, Dong, Zhipeng, Calinon, Sylvain, Caldwell, Darwin, Chen, Fei
Humanoid robots are envisioned as embodied intelligent agents capable of performing a wide range of human-level loco-manipulation tasks, particularly in scenarios requiring strenuous and repetitive labor. However, learning these skills is challenging due to the high degrees of freedom of humanoid robots, and collecting sufficient training data for humanoids is a laborious process. Given the rapid introduction of new humanoid platforms, a cross-embodiment framework that allows generalizable skill transfer is becoming increasingly critical. To address this, we propose a transferable framework that reduces the data bottleneck by using a unified digital human model as a common prototype and bypassing the need for re-training on every new robot platform. The model learns behavior primitives from human demonstrations through adversarial imitation, and the complex robot structures are decomposed into functional components, each trained independently and dynamically coordinated. Task generalization is achieved through a human-object interaction graph, and skills are transferred to different robots via embodiment-specific kinematic motion retargeting and dynamic fine-tuning. Our framework is validated on five humanoid robots with diverse configurations, demonstrating stable loco-manipulation and highlighting its effectiveness in reducing data requirements and increasing the efficiency of skill transfer across platforms.
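The embodiment-specific step of such a pipeline can be illustrated with a toy kinematic retargeting map from digital-human joints to a robot's joints. Every name, scale factor, and limit below is hypothetical; real retargeting would also handle link-length differences and closed-form or optimization-based pose matching.

```python
def retarget(human_angles, joint_map):
    """Map digital-human joint angles (rad) onto a robot's joints by name,
    scaling and clamping to the robot's limits -- a minimal sketch of
    kinematic motion retargeting."""
    robot_angles = {}
    for robot_joint, (human_joint, scale, lo, hi) in joint_map.items():
        q = scale * human_angles[human_joint]
        robot_angles[robot_joint] = min(max(q, lo), hi)  # respect joint limits
    return robot_angles

# Hypothetical single-joint mapping: 0.8 rad target clamps to the 0.5 rad limit.
out = retarget({"shoulder_pitch": 1.0},
               {"r_shoulder": ("shoulder_pitch", 0.8, -1.5, 0.5)})
```

The per-robot `joint_map` is the only embodiment-specific piece, which is what lets the shared digital-human prototype amortize training across platforms.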
Whole-Body Control on Non-holonomic Mobile Manipulation for Grapevine Winter Pruning Automation
Teng, Tao, Fernandes, Miguel, Gatti, Matteo, Poni, Stefano, Semini, Claudio, Caldwell, Darwin, Chen, Fei
Mobile manipulators, which combine mobility and manipulability, are increasingly being used in various unstructured field scenarios, e.g., vineyards. Therefore, coordinated motion of the mobile base and the manipulator is essential to overall performance. In this paper, we explore a whole-body motion controller for a robot composed of a 2-DoF non-holonomic wheeled mobile base and a 7-DoF manipulator (a non-holonomic wheeled mobile manipulator, NWMM). This robotic platform is designed to efficiently undertake complex grapevine pruning tasks. In the control framework, task-priority coordinated motion of the NWMM is guaranteed: lower-priority tasks are projected into the null space of the top-priority tasks, so that higher-priority tasks are completed without interference from lower-priority ones. The proposed controller was evaluated in a grapevine spur pruning experiment scenario.
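The null-space projection described above can be sketched for two task levels. This is the standard task-priority resolution at the velocity level (Siciliano-Slotine style); the function name and the use of plain pseudoinverses (rather than weighted or damped ones) are simplifying assumptions.

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Two-level task-priority inverse kinematics: the secondary task is
    executed only in the null space of the primary task, so it cannot
    disturb the primary task's end-effector velocity."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1      # null-space projector of task 1
    dq = J1_pinv @ dx1                           # primary task velocities
    dq = dq + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)  # secondary, projected
    return dq
```

For the NWMM, `J1` would stack the non-holonomic base and arm Jacobians for the pruning-tool pose, while lower-priority objectives (e.g., posture preferences) enter as `J2`.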
Learning Collaborative Impedance-Based Robot Behaviors
Rozo, Leonel Dario (Istituto Italiano di Tecnologia) | Calinon, Sylvain (Istituto Italiano di Tecnologia) | Caldwell, Darwin (Istituto Italiano di Tecnologia) | Jimenez, Pablo (Researcher, Institut de Robotica i Informatica Industrial) | Torras, Carme (Institut de Robotica i Informatica Industrial)
Research in learning from demonstration has focused on transferring movements from humans to robots. However, a need is arising for robots that do not just replicate a task on their own, but also interact with humans in a safe and natural way to accomplish tasks cooperatively. Robots with variable impedance capabilities open the door to new challenging applications, where learning algorithms must be extended to encapsulate force and vision information. In this paper, we propose a framework to transfer impedance-based behaviors to a torque-controlled robot through kinesthetic teaching. The proposed model encodes the examples as a task-parameterized statistical dynamical system, where the robot impedance is shaped by estimating virtual stiffness matrices from the set of demonstrations. A collaborative assembly task is used as a testbed. The results show that the model can modify the robot impedance during task execution to facilitate collaboration, triggering stiff and compliant behaviors in an online manner to adapt to the user's actions.
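One simple way to estimate a virtual stiffness matrix from kinesthetic demonstrations is to regress sensed forces against position errors. The least-squares sketch below assumes a linear spring model F ≈ K e; it illustrates the idea of fitting stiffness from data, not the paper's task-parameterized statistical formulation.

```python
import numpy as np

def estimate_stiffness(pos_err, forces):
    """Least-squares fit of a virtual stiffness matrix K from demonstrated
    position errors e = x_ref - x and sensed forces, assuming F ~= K e.
    pos_err, forces: (T, d) arrays of T samples in d dimensions."""
    # Solve pos_err @ K.T = forces for K.T in the least-squares sense.
    K_T, *_ = np.linalg.lstsq(pos_err, forces, rcond=None)
    return K_T.T
```

Fitting separate matrices on segments of the demonstrations (or per mixture component, as in the paper's model) is what allows stiff and compliant phases to be triggered online during the assembly task.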