A Laser-guided Interaction Interface for Providing Effective Robot Assistance to People with Upper Limbs Impairments
Torielli, Davide, Bertoni, Liana, Muratore, Luca, Tsagarakis, Nikos
Robotics has shown significant potential in assisting people with disabilities, enhancing their independence and involvement in daily activities. Indeed, a long-term societal impact is expected in home-care assistance with the deployment of intelligent robotic interfaces. This work presents a human-robot interface developed to help people with upper limb impairments, such as those caused by stroke, in activities of everyday life. The proposed interface leverages a visual servoing guidance component, which utilizes an inexpensive but effective laser emitter device. By projecting the laser on a surface within the workspace of the robot, the user can guide the robotic manipulator to desired locations to reach, grasp and manipulate objects. Considering the targeted users, the laser emitter is worn on the head, enabling the user to intuitively control the robot motions with head movements that point the laser in the environment; the laser projection is detected with a neural-network-based perception module. The interface implements two control modalities: the first allows the user to select specific locations directly, commanding the robot to reach those points; the second employs a paper keyboard with buttons that can be virtually pressed by pointing the laser at them. These buttons enable a more direct control of the Cartesian velocity of the end-effector and provide additional functionalities, such as commanding the action of the gripper. The proposed interface is evaluated in a series of manipulation tasks involving a 6-DoF assistive robot manipulator equipped with a 1-DoF beak-like gripper. The two interface modalities are combined to successfully accomplish tasks requiring bimanual capacity, which is usually affected in people with upper limb impairments.
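The laser-spot detection step can be illustrated with a toy centroid-based detector (a minimal NumPy sketch under assumed inputs; the paper's actual perception module is neural-network based, and the function name, threshold, and synthetic frame here are illustrative, not from the paper):

```python
import numpy as np

def detect_laser_spot(red_channel, threshold=200):
    """Return the (row, col) centroid of pixels brighter than `threshold`,
    or None when no laser spot is visible in the frame."""
    mask = red_channel > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Synthetic 100x100 frame with a bright 3x3 "laser dot" centred at (40, 60)
frame = np.zeros((100, 100))
frame[39:42, 59:62] = 255
print(detect_laser_spot(frame))  # -> (40.0, 60.0)
```

The detected pixel centroid would then be mapped (via the camera model) to a 3D target for the visual servoing loop.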
Wearable Haptics for a Marionette-inspired Teleoperation of Highly Redundant Robotic Systems
Torielli, Davide, Franco, Leonardo, Pozzi, Maria, Muratore, Luca, Malvezzi, Monica, Tsagarakis, Nikos, Prattichizzo, Domenico
The teleoperation of complex, kinematically redundant robots with loco-manipulation capabilities represents a challenge for human operators, who have to learn how to operate the many degrees of freedom of the robot to accomplish a desired task. In this context, developing an easy-to-learn and easy-to-use human-robot interface is paramount. Recent works introduced a novel teleoperation concept that relies on a virtual physical interaction interface between the human operator and the remote robot, equivalent to a "Marionette" control, but limited to visual feedback on the human side. In this paper, we propose extending the "Marionette" interface with a wearable haptic interface to cope with the limitations of the previous works. Leveraging the additional haptic feedback modality, the human operator gains full sensorimotor control over the robot, and the awareness of the robot's response and interactions with the environment is greatly improved. We evaluated the proposed interface and the related teleoperation framework with naive users, assessing the teleoperation performance and the user experience with and without haptic feedback. The conducted experiments consisted of a loco-manipulation mission with the CENTAURO robot, a hybrid leg-wheel quadruped with a humanoid dual-arm upper body.
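One common way to render such virtual physical coupling on a wearable haptic device is a spring-damper force between the operator's hand and the tracked robot point. A minimal sketch (the gains and the specific coupling law are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def coupling_force(x_operator, x_robot, stiffness=50.0, damping=5.0,
                   v_operator=None, v_robot=None):
    """Virtual spring-damper force rendered on the haptic device:
    it pulls the operator's hand toward the tracked robot point."""
    force = stiffness * (np.asarray(x_robot, float) - np.asarray(x_operator, float))
    if v_operator is not None and v_robot is not None:
        force += damping * (np.asarray(v_robot, float) - np.asarray(v_operator, float))
    return force

# Operator's hand 2 cm ahead of the robot's tracked point along x:
print(coupling_force([0.02, 0.0, 0.0], [0.0, 0.0, 0.0]))  # -> [-1.  0.  0.]
```

When the robot lags or contacts the environment, the tracking error grows and the rendered force grows with it, which is what restores the operator's awareness of the interaction.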
Dynamic object goal pushing with mobile manipulators through model-free constrained reinforcement learning
Dadiotis, Ioannis, Mittal, Mayank, Tsagarakis, Nikos, Hutter, Marco
Non-prehensile pushing to move and reorient objects to a goal is a versatile loco-manipulation skill. In the real world, the object's physical properties and friction with the floor contain significant uncertainties, which makes the task challenging for a mobile manipulator. In this paper, we develop a learning-based controller for a mobile manipulator to move an unknown object to a desired position and yaw orientation through a sequence of pushing actions. The proposed controller for the robotic arm and the mobile base motion is trained using a constrained Reinforcement Learning (RL) formulation. We demonstrate its capability in experiments with a quadrupedal robot equipped with an arm. The learned policy achieves a success rate of 91.35% in simulation and at least 80% on hardware in challenging scenarios. Through our extensive hardware experiments, we show that the approach demonstrates high robustness against unknown objects of different masses, materials, sizes, and shapes. It reactively discovers the pushing location and direction, thus achieving contact-rich behavior while observing only the pose of the object. Additionally, we demonstrate the adaptive behavior of the learned policy towards preventing the object from toppling.
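Constrained RL formulations of this kind are commonly handled with a Lagrangian relaxation, where a dual variable scales the constraint penalty in the policy objective. A minimal sketch of the dual update (learning rate, limit, and cost values are illustrative assumptions, not values from the paper):

```python
def update_lagrange_multiplier(lmbda, constraint_cost, limit, lr=0.05):
    """Projected gradient ascent on the dual variable: lambda grows while the
    measured constraint cost exceeds its limit, and decays toward 0 otherwise."""
    return max(0.0, lmbda + lr * (constraint_cost - limit))

lmbda = 0.0
for cost in [1.5, 1.2, 1.1, 1.05]:   # constraint cost measured each training iteration
    lmbda = update_lagrange_multiplier(lmbda, cost, limit=1.0)
print(lmbda)  # grows while the constraint is violated, shrinks as violations fade
```

The policy then maximizes reward minus `lmbda` times the constraint cost, so constraints (e.g., avoiding toppling the object) are enforced without hand-tuned penalty weights.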
HYPERmotion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation
Wang, Jin, Dai, Rui, Wang, Weijie, Rossini, Luca, Ruscelli, Francesco, Tsagarakis, Nikos
Enabling robots to autonomously perform hybrid motions in diverse environments can be beneficial for long-horizon tasks such as material handling, household chores, and work assistance. This requires extensive exploitation of intrinsic motion capabilities, extraction of affordances from rich environmental information, and planning of physical interaction behaviors. Although recent progress has demonstrated impressive humanoid whole-body control abilities, these approaches struggle to achieve versatility and adaptability for new tasks. In this work, we propose HYPERmotion, a framework that learns, selects and plans behaviors based on tasks in different scenarios. We combine reinforcement learning with whole-body optimization to generate motion for 38 actuated joints and create a motion library to store the learned skills. We apply the planning and reasoning features of large language models (LLMs) to complex loco-manipulation tasks, constructing a hierarchical task graph that comprises a series of primitive behaviors to bridge lower-level execution with higher-level planning. A vision language model (VLM) combines distilled spatial geometry with 2D observations to ground knowledge into a robot morphology selector, which chooses appropriate actions for single- or dual-arm manipulation and legged or wheeled locomotion. Experiments in simulation and the real world show that the learned motions can efficiently adapt to new tasks, demonstrating high autonomy from free-text commands in unstructured scenes. Videos and website: hy-motion.github.io/
Whole-body MPC for highly redundant legged manipulators: experimental evaluation with a 37 DoF dual-arm quadruped
Dadiotis, Ioannis, Laurenzi, Arturo, Tsagarakis, Nikos
Recent progress in legged locomotion has rendered quadruped manipulators a promising solution for performing tasks that require both mobility and manipulation (loco-manipulation). In the real world, task specifications and/or environment constraints may require the quadruped manipulator to be equipped with high redundancy as well as whole-body motion coordination capabilities. This work presents an experimental evaluation of a whole-body Model Predictive Control (MPC) framework achieving real-time performance on a dual-arm quadruped platform consisting of 37 actuated joints. To the best of our knowledge, this is the legged manipulator with the highest number of joints to be controlled with real-time whole-body MPC so far. The computational efficiency of the MPC, while considering the full robot kinematics and the centroidal dynamics model, builds upon an open-source DDP-variant solver and a state-of-the-art optimal control problem formulation. Differently from previous works on quadruped manipulators, the MPC is directly interfaced with the low-level joint impedance controllers without the need to design an instantaneous whole-body controller. The feasibility on the real hardware is showcased using the CENTAURO platform for the challenging task of picking a heavy object from the ground. Dynamic stepping (trotting) is also showcased for the first time with this robot. The results highlight the potential of replanning with whole-body information in a predictive control loop.
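The receding-horizon structure common to such MPC frameworks (plan a horizon of inputs, execute only the first, replan at the next tick) can be sketched on a toy single-integrator model. This is purely a structural illustration with assumed gains and model; the actual framework optimizes full kinematics and centroidal dynamics with a DDP-variant solver:

```python
import numpy as np

def mpc_step(x, x_goal, horizon=10, dt=0.05, gain=0.8):
    """One receding-horizon step on a toy single-integrator model:
    plan `horizon` inputs, return only the first one for execution."""
    plan = []
    x_pred = np.asarray(x, dtype=float)
    for _ in range(horizon):
        u = gain * (np.asarray(x_goal) - x_pred)   # toy stand-in for the solver
        plan.append(u)
        x_pred = x_pred + dt * u                   # roll the model forward
    return plan[0]                                 # ...but execute only the first input

x = np.zeros(2)
for _ in range(100):                  # replan at every control tick
    x = x + 0.05 * mpc_step(x, x_goal=[1.0, 0.5])
print(np.round(x, 2))                 # converges close to the goal [1.0, 0.5]
```

The key property this illustrates is that feedback enters through replanning itself: disturbances change the measured state, and the next horizon is planned from it.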
In-Hand Re-grasp Manipulation with Passive Dynamic Actions via Imitation Learning
Wei, Dehao, Sun, Guokang, Ren, Zeyu, Li, Shuang, Shao, Zhufeng, Li, Xiang, Tsagarakis, Nikos, Ma, Shaohua
Re-grasp manipulation leverages ergonomic tools to assist humans in accomplishing diverse tasks. In certain scenarios, humans often employ external forces to effortlessly and precisely re-grasp tools like a hammer. Previous developments of controllers for in-grasp sliding motion using passive dynamic actions (e.g., gravity) rely on knowledge of finger-object contact information and require a customized design for individual objects with varied geometry and weight distribution, which limits their adaptability to diverse objects. In this paper, we propose an end-to-end sliding motion controller based on imitation learning (IL) that requires minimal prior knowledge of object mechanics, relying solely on object position information. To expedite training convergence, we use a data glove to collect expert data trajectories and train the policy through Generative Adversarial Imitation Learning (GAIL). Simulation results demonstrate the controller's versatility in performing in-hand sliding tasks with objects of varying friction coefficients, geometric shapes, and masses. When migrated to a physical system using visual position estimation, the controller achieved an average success rate of 86%, surpassing the baseline success rates of 35% for Behavior Cloning (BC) and 20% for Proximal Policy Optimization (PPO).
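In GAIL, the policy's reward is not hand-designed but derived from a discriminator trained to tell expert transitions from policy transitions. A minimal sketch of the standard surrogate reward (the function name and epsilon are illustrative; the paper's exact formulation may differ):

```python
import math

def gail_reward(d_out, eps=1e-8):
    """Surrogate reward from the discriminator output D(s, a) in (0, 1):
    large when the discriminator judges the transition expert-like."""
    return -math.log(1.0 - d_out + eps)

# A transition the discriminator rates as expert-like earns more reward:
print(gail_reward(0.9) > gail_reward(0.1))  # -> True
```

Training alternates between updating the discriminator on expert-vs-policy data (here, trajectories collected with the data glove) and updating the policy with this reward.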
Trajectory Optimization for Quadruped Mobile Manipulators that Carry Heavy Payload
Dadiotis, Ioannis, Laurenzi, Arturo, Tsagarakis, Nikos
This paper presents a simplified model-based trajectory optimization (TO) formulation for motion planning on quadruped mobile manipulators that carry a heavy payload of known mass. The proposed payload-aware formulation simultaneously plans locomotion and payload manipulation, and considers both robot and payload dynamics while remaining computationally efficient. In the presence of a heavy payload, the approach exhibits reduced leg outstretching (and thus increased manipulability) in kinematically demanding motions, due to the contribution of payload manipulation in the optimization. The framework's computational efficiency and performance are validated through a number of simulation and experimental studies with the bi-manual quadruped CENTAURO robot carrying on its arms a payload that exceeds 15% of its mass while traversing non-flat terrain.
Nonlinear Model Predictive Control for Robust Bipedal Locomotion: Exploring Angular Momentum and CoM Height Changes
Ding, Jiatao, Zhou, Chengxu, Xin, Songyan, Xiao, Xiaohui, Tsagarakis, Nikos
Human beings can utilize multiple balance strategies, e.g. step location adjustment and angular momentum adaptation, to withstand disturbances. In this work, we propose a novel Nonlinear Model Predictive Control (NMPC) framework for robust locomotion, with the capabilities of step location adjustment, Center of Mass (CoM) height variation, and angular momentum adaptation. These features are realized by constraining the Zero Moment Point within the support polygon. By using the nonlinear inverted pendulum plus flywheel model, the effects of upper-body rotation and vertical height motion are considered. As a result, the NMPC is formulated as a quadratically constrained quadratic program, which is solved quickly by sequential quadratic programming. Using this unified framework, robust walking patterns that exploit reactive stepping, body inclination, and CoM height variation are generated based on the state estimation. The adaptability for bipedal walking in multiple scenarios has been demonstrated through simulation studies.

Humanoid robots have attracted much attention for their capabilities in accomplishing challenging tasks in real-world environments. Over several decades, state-of-the-art robot platforms such as ASIMO [1], Atlas [2], WALK-MAN [3], and CogIMon [4] have been developed for this purpose. However, due to the complex nonlinear dynamics of bipedal locomotion, enhancing walking stability, which is among the prerequisites for making humanoids practical, still needs further study. In this paper, inspired by the fact that human beings can make use of redundant Degrees of Freedom (DoF) and adopt various strategies, such as the ankle, hip, and stepping strategies, to realize balance recovery [5]-[7], we aim to develop a versatile and robust walking pattern generator which can integrate multiple balance strategies in a unified way.
To generate walking patterns in a time-efficient manner, simplified dynamic models have been proposed, among which the Linear Inverted Pendulum Model (LIPM) is widely used [8]. Using the LIPM, Kajita et al. proposed preview control for Zero Moment Point (ZMP) tracking [9]. By adopting a Linear Quadratic Regulator (LQR) scheme, the ankle torque was adjusted to modulate the ZMP and Center of Mass (CoM) trajectories. Nevertheless, this strategy can neither modulate the step parameters nor take into consideration the feasibility constraints arising from actuation limitations and environmental constraints. To overcome this drawback, Wieber et al. proposed a Model Predictive Control (MPC) algorithm to utilize the ankle strategy [10] and then extended it for adjusting the step location [11].
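For context, the LIPM constrains the CoM to a constant height and yields linear dynamics relating the CoM horizontal position to the ZMP; adding a flywheel captures the angular-momentum (hip) strategy exploited by the NMPC above. In standard textbook form (not transcribed from the paper), with CoM position $x$, ZMP position $p_x$, constant CoM height $z_c$, gravity $g$, robot mass $m$, and flywheel (upper-body) torque $\tau_{\mathrm{fw}}$:

```latex
\ddot{x} = \frac{g}{z_c}\,(x - p_x)
\qquad \text{(LIPM)}

\ddot{x} = \frac{g}{z_c}\,(x - p_x) - \frac{\tau_{\mathrm{fw}}}{m\,z_c}
\qquad \text{(LIP + flywheel)}
```

Relaxing the constant-height assumption (CoM height variation) makes these dynamics nonlinear, which is what leads to the quadratically constrained quadratic program formulation described in the abstract.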