Goto

Collaborating Authors

 Lee, Dongheui


Variable Stiffness for Robust Locomotion through Reinforcement Learning

arXiv.org Artificial Intelligence

Reinforcement-learned locomotion enables legged robots to perform highly dynamic motions but often requires time-consuming manual tuning of joint stiffness. This paper introduces a novel control paradigm that integrates variable stiffness into the action space alongside joint positions, enabling grouped stiffness control such as per-joint stiffness (PJS), per-leg stiffness (PLS), and hybrid joint-leg stiffness (HJLS). We show that variable stiffness policies, with stiffness grouped per leg (PLS), outperform position-based control in velocity tracking and push recovery. In contrast, HJLS excels in energy efficiency. Furthermore, our method exhibits robust walking behaviour on diverse outdoor terrains after sim-to-real transfer, even though the policy is trained solely on a flat floor. Our approach simplifies design by eliminating per-joint stiffness tuning while remaining competitive across various metrics.
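As a rough illustration of the action-space idea (a minimal sketch, not the authors' implementation), a policy action can be read as joint position targets plus a small number of stiffness gains shared per leg (PLS), which then feed a PD torque law. All shapes, gain ranges, and the damping rule below are assumptions.

```python
import numpy as np

# Sketch of per-leg stiffness (PLS) action decoding for a quadruped with
# 4 legs x 3 joints. Shapes, gain ranges, and the damping rule are assumptions.
N_LEGS, JOINTS_PER_LEG = 4, 3
N_JOINTS = N_LEGS * JOINTS_PER_LEG

def decode_action(action, q, dq):
    """Map a policy action to joint torques via a PD law with learned stiffness.

    action: first N_JOINTS entries are desired joint positions,
            last N_LEGS entries are per-leg stiffness commands in [0, 1].
    """
    q_des = action[:N_JOINTS]
    kp_leg = 20.0 + 60.0 * action[N_JOINTS:]    # scale command to a stiffness range
    kp = np.repeat(kp_leg, JOINTS_PER_LEG)      # one gain shared by all joints of a leg
    kd = 0.1 * 2.0 * np.sqrt(kp)                # heuristic damping derived from stiffness
    return kp * (q_des - q) - kd * dq           # PD torque sent to the motors

# Example: zero state, mid-range stiffness command on every leg
tau = decode_action(np.concatenate([np.zeros(N_JOINTS), 0.5 * np.ones(N_LEGS)]),
                    np.zeros(N_JOINTS), np.zeros(N_JOINTS))
```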


REASSEMBLE: A Multimodal Dataset for Contact-rich Robotic Assembly and Disassembly

arXiv.org Artificial Intelligence

In REASSEMBLE, we focus on creating a dataset for contact-rich manipulation tasks. We leverage the well-established NIST Assembly Task Board #1 [1] to facilitate deployment of learned algorithms across different research institutes. The dataset includes modalities that are common in other works, such as RGB cameras and robot proprioception. Additionally, we incorporate event cameras, a force-torque sensor, and microphones, which are less common in manipulation datasets and which we hope will be beneficial to the community. Robotic manipulation remains a core challenge in advancing robotics toward complex, real-world scenarios. In contrast to simpler settings, more complex challenges, such as long-horizon assemblies, nut threading, and other contact-rich manipulation tasks shown in Figure 1, which require an understanding of interaction dynamics and the ability to plan and execute precise, goal-oriented actions, remain less explored. What sets REASSEMBLE apart from other robot manipulation datasets is its focus on multimodal data for holistic learning frameworks. Recent studies, including [11], have highlighted the limitations of current state-of-the-art algorithms in addressing such tasks; a comparison of commonly used robot learning datasets and their properties is given in Table I.


ConditionNET: Learning Preconditions and Effects for Execution Monitoring

arXiv.org Artificial Intelligence

The introduction of robots into everyday scenarios necessitates algorithms capable of monitoring the execution of tasks. In this paper, we propose ConditionNET, an approach for learning the preconditions and effects of actions in a fully data-driven manner. We develop an efficient vision-language model and introduce additional training objectives that encourage consistent feature representations. ConditionNET explicitly models the dependencies between actions, preconditions, and effects, leading to improved performance. We evaluate our model on two robotic datasets, one of which we collected for this paper; it contains 406 successful and 138 failed teleoperated demonstrations of a Franka Emika Panda robot performing tasks such as pouring and cleaning the counter. Our experiments show that ConditionNET outperforms all baselines on both anomaly detection and phase prediction tasks. Furthermore, we implement an action-monitoring system on a real robot to demonstrate the practical applicability of the learned preconditions and effects. Our results highlight the potential of ConditionNET for enhancing the reliability and adaptability of robots in real-world environments. The data is available on the project website: https://dsliwowski1.github.io/ConditionNET_page.
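To make the execution-monitoring idea concrete, a hypothetical loop could query a trained precondition/effect classifier before and after each action. The `model.predict(frame, action_text)` interface, the threshold values, and the robot/camera objects below are assumptions for illustration, not the paper's API.

```python
# Hypothetical execution-monitoring loop around a precondition/effect classifier.
# `model.predict(frame, action_text)` is an assumed interface returning the
# probabilities that the action's precondition and effect hold in the frame.
PRE_THRESHOLD = 0.5
EFF_THRESHOLD = 0.5

def monitor_action(model, camera, robot, action_text):
    pre_prob, _ = model.predict(camera.get_frame(), action_text)
    if pre_prob < PRE_THRESHOLD:
        return "precondition_violated"    # e.g. object not reachable -> do not execute
    robot.execute(action_text)
    _, eff_prob = model.predict(camera.get_frame(), action_text)
    if eff_prob < EFF_THRESHOLD:
        return "effect_not_achieved"      # anomaly detected: replan or ask for help
    return "success"
```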


Enhancing Model-Based Step Adaptation for Push Recovery through Reinforcement Learning of Step Timing and Region

arXiv.org Artificial Intelligence

This paper introduces a new approach to enhancing the robustness of humanoid walking under strong perturbations, such as substantial pushes. Effective recovery from external disturbances requires bipedal robots to dynamically adjust their stepping strategies, including footstep positions and timing. Unlike most advanced walking controllers, which restrict footstep locations to a predefined convex region and thereby substantially limit the recoverable disturbances, our method leverages reinforcement learning to dynamically adjust the permissible footstep region, expanding it to a larger, effectively non-convex area and allowing cross-over stepping, which is crucial for counteracting large lateral pushes. Additionally, our method adapts footstep timing in real time to further extend the range of recoverable disturbances. Based on these adjustments, feasible footstep positions and the divergent component of motion (DCM) trajectory are planned by solving a quadratic program (QP). Finally, we employ a DCM controller and an inverse-dynamics whole-body control framework to ensure the robot effectively follows the trajectory.
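As a rough illustration of the planning step (a sketch under simplifying assumptions, not the paper's formulation), the next footstep can be adapted by a small QP on the linear-inverted-pendulum DCM dynamics. The nominal values, weights, and the rectangular footstep bounds that stand in for the learned (possibly non-convex) region are all assumptions.

```python
import numpy as np
import cvxpy as cp

# Sketch: adapt the next footstep so the end-of-step DCM offset stays close to
# its nominal value (linear inverted pendulum model). All numbers are assumptions.
omega = 3.0                      # sqrt(g / z_com), pendulum natural frequency
T_step = 0.4                     # step duration (could itself be adapted by RL)
xi0 = np.array([0.05, 0.02])     # current DCM, expressed in the stance-foot frame
p_stance = np.zeros(2)           # current stance-foot position
u_nom = np.array([0.20, 0.10])   # nominal next footstep
b_nom = np.array([0.03, 0.01])   # nominal DCM offset at step end

xi_end = (xi0 - p_stance) * np.exp(omega * T_step) + p_stance  # DCM at step end

u = cp.Variable(2)               # next footstep position
b = cp.Variable(2)               # DCM offset at touchdown
objective = cp.Minimize(cp.sum_squares(u - u_nom) + 10.0 * cp.sum_squares(b - b_nom))
constraints = [
    u + b == xi_end,                  # DCM-offset definition at the end of the step
    u >= np.array([-0.1, -0.3]),      # rectangular bounds standing in for the
    u <= np.array([0.4, 0.3]),        # (RL-adjusted) permissible footstep region
]
cp.Problem(objective, constraints).solve()
print("adapted footstep:", u.value)
```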


I-CTRL: Imitation to Control Humanoid Robots Through Constrained Reinforcement Learning

arXiv.org Artificial Intelligence

This paper addresses the critical need to refine robot motions that, despite achieving high visual similarity through human-to-humanoid retargeting methods, fall short of practical execution in the physical realm. Existing techniques in the graphics community often prioritize visual fidelity over physics-based feasibility, posing a significant challenge for deploying bipedal systems in practical applications. Our research introduces a constrained reinforcement learning algorithm that produces physics-based, high-quality motion imitation on legged humanoid robots, enhancing motion resemblance while successfully following the reference human trajectory. We name our framework I-CTRL. By reformulating motion imitation as a constrained refinement of non-physics-based retargeted motions, our framework excels at motion imitation with a simple, unified reward design that generalizes across four robots. Moreover, our framework can follow large-scale motion datasets with a single RL agent. The proposed approach marks a crucial step forward in advancing the control of bipedal robots, emphasizing the importance of aligning visual and physical realism for successful motion imitation.


Shared Autonomy via Variable Impedance Control and Virtual Potential Fields for Encoding Human Demonstration

arXiv.org Artificial Intelligence

This article introduces a framework for complex human-robot collaboration tasks, such as the co-manufacturing of furniture. For such tasks, it is essential to encode skills from human demonstration and to reproduce them in a compliant and safe manner. To this end, two key components are addressed in this work: motion generation and shared autonomy. We propose a motion generator based on a time-invariant potential field that is capable of encoding wrench profiles and complex, closed-loop trajectories, and that additionally incorporates obstacle avoidance. The paper further addresses shared autonomy (SA), which enables synergistic collaboration between human operators and robots by dynamically allocating authority. Variable impedance control (VIC) and force control are employed, where impedance and wrench are adapted based on a human-robot autonomy factor derived from interaction forces. System passivity is ensured by an energy-tank-based task passivation strategy. The framework's efficacy is validated through simulations and an experimental study employing a Franka Emika Research 3 robot. More information can be found on the project website: https://shailjadav.github.io/SALADS/
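A minimal sketch of the authority-allocation idea (the force-to-autonomy mapping, variable names, and gains are assumptions): the magnitude of the measured interaction force drives an autonomy factor that blends the stiffness between a compliant "human leads" setting and a stiff "robot executes the encoded task" setting.

```python
import numpy as np

# Sketch of shared-autonomy impedance blending. The smooth mapping from
# interaction force to autonomy factor and the stiffness values are assumptions.
K_AUTONOMOUS = np.diag([800.0, 800.0, 800.0])   # stiff: robot leads (N/m)
K_COMPLIANT  = np.diag([50.0, 50.0, 50.0])      # soft: human leads (N/m)
F_SCALE = 10.0                                   # force (N) at which authority shifts

def autonomy_factor(f_interaction):
    """alpha -> 1: robot in charge, alpha -> 0: human in charge."""
    return float(np.exp(-np.linalg.norm(f_interaction) / F_SCALE))

def blended_stiffness(f_interaction):
    alpha = autonomy_factor(f_interaction)
    return alpha * K_AUTONOMOUS + (1.0 - alpha) * K_COMPLIANT

# Example: a 15 N push by the human lowers the stiffness, yielding authority
K = blended_stiffness(np.array([15.0, 0.0, 0.0]))
```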


Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction

arXiv.org Artificial Intelligence

Integrating robots into populated environments is a complex challenge that requires an understanding of human social dynamics. In this work, we propose to model social motion forecasting in a shared human-robot representation space, which allows us to synthesize robot motions that interact with humans in social scenarios even though no robot is observed during motion training. We develop a transformer-based architecture called ECHO, which operates in this shared space to predict the future motions of the agents encountered in social scenarios. Contrary to prior works, we reformulate social motion forecasting as the refinement of predicted individual motions based on the surrounding agents, which simplifies training while allowing for single-motion forecasting when only one human is in the scene. We evaluate our model on multi-person and human-robot motion forecasting tasks and obtain state-of-the-art performance by a large margin while remaining efficient and running in real time. Additionally, our qualitative results showcase the effectiveness of our approach in generating human-robot interaction behaviors that can be controlled via text commands.


ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space

arXiv.org Artificial Intelligence

This paper introduces a novel deep-learning approach for human-to-robot motion retargeting that enables robots to mimic human poses accurately. In contrast to prior deep-learning-based works, our method does not require paired human-to-robot data, which facilitates its transfer to new robots. First, we construct a shared latent space between humans and robots via adaptive contrastive learning that exploits a proposed cross-domain similarity metric between human and robot poses. Additionally, we propose a consistency term to build a common latent space that captures pose similarity with precision while allowing direct robot motion control from the latent space. For instance, we can generate in-between motions through simple linear interpolation between two projected human poses. We conduct a comprehensive evaluation of robot control from diverse modalities (i.e., texts, RGB videos, and key poses), which facilitates robot control for non-expert users. Our model outperforms existing human-to-robot retargeting works in terms of efficiency and precision. Finally, we implemented our method on a real robot with self-collision avoidance through a whole-body controller to showcase the effectiveness of our approach. More information is available on our website: https://evm7.github.io/UnsH2R/
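To illustrate the latent-space control mentioned above, a hypothetical use of the shared latent space could look like the following; the `human_encoder` and `robot_decoder` interfaces are assumptions for illustration, not the released API.

```python
import numpy as np

def interpolate_in_latent(human_encoder, robot_decoder,
                          human_pose_a, human_pose_b, n_steps=10):
    """Generate in-between robot poses by linear interpolation in the shared latent space.

    `human_encoder(pose) -> z` and `robot_decoder(z) -> robot_joint_angles`
    are assumed interfaces to the trained networks.
    """
    z_a = human_encoder(human_pose_a)
    z_b = human_encoder(human_pose_b)
    robot_motion = []
    for t in np.linspace(0.0, 1.0, n_steps):
        z_t = (1.0 - t) * z_a + t * z_b          # straight line in latent space
        robot_motion.append(robot_decoder(z_t))  # decode to a robot pose
    return robot_motion
```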


HOI4ABOT: Human-Object Interaction Anticipation for Human Intention Reading Collaborative roBOTs

arXiv.org Artificial Intelligence

Robots are becoming increasingly integrated into our lives, assisting us in various tasks. To ensure effective collaboration between humans and robots, it is essential that they understand our intentions and anticipate our actions. In this paper, we propose a Human-Object Interaction (HOI) anticipation framework for collaborative robots. We develop an efficient and robust transformer-based model to detect and anticipate HOIs from videos. This enhanced anticipation empowers robots to proactively assist humans, resulting in more efficient and intuitive collaboration. Our model outperforms state-of-the-art results in HOI detection and anticipation on the VidHOI dataset, with increases of 1.76% and 1.04% in mAP, respectively, while being 15.4 times faster. We showcase the effectiveness of our approach through experiments on a real robot, demonstrating that the robot's ability to anticipate HOIs is key to better human-robot interaction. More information can be found on our project webpage: https://evm7.github.io/HOI4ABOT_page/


Orientation Control with Variable Stiffness Dynamical Systems

arXiv.org Artificial Intelligence

Recently, several approaches have attempted to combine motion generation and control in one loop to equip robots with reactive behaviors that cannot be achieved with traditional time-indexed tracking controllers. These approaches, however, have mainly focused on positions, neglecting the orientation part, which can be crucial to many tasks, e.g., screwing. In this work, we propose a control algorithm that adapts the robot's rotational motion and impedance in a closed-loop manner. Given a first-order dynamical system representing an orientation motion plan and a desired rotational stiffness profile, our approach enables the robot to follow the reference motion with an interactive behavior specified by the desired stiffness, while always being aware of the current orientation, represented as a unit quaternion (UQ). We formulate our algorithm using the Lie algebra, since, unlike positions, UQs are subject to constraints that must be respected by the devised controller. We validate our proposed approach in multiple robot experiments, showcasing the ability of our controller to follow complex orientation profiles, react safely to perturbations, and fulfill physical interaction tasks.
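As a simplified illustration of impedance control on orientations (a sketch, not the proposed controller): the orientation error can be mapped to the Lie algebra via the logarithmic map and passed through a rotational stiffness that may vary over time. SciPy's rotation-vector representation plays the role of the log map here, and all gains are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotational_impedance_torque(q_des, q, omega, K_rot, D_rot):
    """Sketch of a variable-stiffness orientation controller.

    q_des, q : unit quaternions (x, y, z, w) for desired and current orientation
    omega    : current angular velocity (rad/s)
    K_rot, D_rot : 3x3 rotational stiffness / damping, possibly time-varying
    """
    # Orientation error mapped to the Lie algebra so(3) via the log map
    # (SciPy's rotation vector is exactly axis * angle).
    e_rot = (R.from_quat(q_des) * R.from_quat(q).inv()).as_rotvec()
    return K_rot @ e_rot - D_rot @ omega

# Example with a (possibly time-varying) stiffness profile
tau = rotational_impedance_torque(
    q_des=R.from_euler("z", 30, degrees=True).as_quat(),
    q=np.array([0.0, 0.0, 0.0, 1.0]),
    omega=np.zeros(3),
    K_rot=np.diag([5.0, 5.0, 5.0]),
    D_rot=np.diag([0.5, 0.5, 0.5]),
)
```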