Collaborating Authors

 Calinon, Sylvain


Diffusion-based Virtual Fixtures

arXiv.org Artificial Intelligence

For a long time, robotics considered objects in the environment primarily as obstacles, and the goal was to avoid contact due to modeling and sensing difficulties. However, the trend has shifted towards embracing contact due to increasing interest in manipulation, tactile robotics, and surface inspection tasks. Consequently, robots physically interact with their surrounding environment, which can be characterized by curved surfaces that may also be soft and fragile (e.g., surgical robotics). However, safety in these tasks remains a major concern during real-world deployment, as they involve forceful interactions. Considering that a significant percentage of recent approaches propose learning-based controllers, and that the majority of shared control and teleoperation tasks depend on the operator's expertise or skills, safety takes a more central role in assistive systems.

By specifying only the target and obstacle regions, a smooth flow field on the tangent space can guide agents to the closest target while avoiding the restricted zones and maintaining contact with the surface, as depicted in Figure 1-c. To address these challenges, we propose a surface virtual fixture method that expects surfaces as possibly noisy and partial point clouds collected at runtime using an off-the-shelf camera attached to the robot. Next, we segment the point cloud into a set of regions with their specified behavior. This segmentation can come from learning-based methods using vision [7] or geometry [8]. Alternatively, one can use virtual or real-world expert annotations [5], [6], possibly in combination.
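The abstract's notion of a smooth guidance field, attracting an agent toward the closest target while steering it away from restricted zones, can be illustrated with a classical potential-field sketch. This is a generic stand-in, not the paper's diffusion-based construction; the Gaussian repulsion term and all gains below are assumptions chosen for illustration.

```python
import numpy as np

def guidance_field(p, target, obstacle, k_att=1.0, k_rep=2.0, sigma=0.15):
    """Smooth guidance vector at point p: linear attraction toward the
    target plus a bounded Gaussian repulsion from a restricted zone.
    (Illustrative potential field, not the paper's diffusion-based field.)"""
    v = k_att * (target - p)                                 # attraction
    diff = p - obstacle
    d = np.linalg.norm(diff) + 1e-9
    v += k_rep * np.exp(-d**2 / (2 * sigma**2)) * diff / d   # repulsion
    return v

# Follow the field with small explicit Euler steps
p = np.array([0.0, 0.0])
target = np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.1])
for _ in range(300):
    p = p + 0.05 * guidance_field(p, target, obstacle)
```

Because the repulsion is a bounded Gaussian bump rather than an inverse-distance barrier, the resulting field stays smooth everywhere, which is the property the abstract emphasizes for safe guidance.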


Learning Goal-oriented Bimanual Dough Rolling Using Dynamic Heterogeneous Graph Based on Human Demonstration

arXiv.org Artificial Intelligence

Soft object manipulation poses significant challenges for robots, requiring effective techniques for state representation and manipulation policy learning. State representation involves capturing the dynamic changes in the environment, while manipulation policy learning focuses on establishing the relationship between robot actions and state transformations to achieve specific goals. To address these challenges, this research paper introduces a novel approach: a dynamic heterogeneous graph-based model for learning goal-oriented soft object manipulation policies. The proposed model utilizes graphs as a unified representation for both states and policy learning. By leveraging the dynamic graph, we can extract crucial information regarding object dynamics and manipulation policies. Furthermore, the model facilitates the integration of demonstrations, enabling guided policy learning. To evaluate the efficacy of our approach, we designed a dough rolling task and conducted experiments using both a differentiable simulator and a real-world humanoid robot. Additionally, several ablation studies were performed to analyze the effect of our method, demonstrating its superiority in achieving human-like behavior.


Robust Manipulation Primitive Learning via Domain Contraction

arXiv.org Artificial Intelligence

Robot manipulation usually involves multiple different manipulation primitives, such as Push and Pivot, leading to hybrid and long-horizon characteristics. This poses significant challenges to most planning and control approaches. Instead of treating long-horizon manipulation as a whole, it can be decomposed into several simple manipulation primitives and then sequenced using PDDL planners [1, 2, 3] or Large Language Models [4, 5]. Although such manipulation primitives usually have low-to-medium-dimensional state and action spaces, the breaking and establishment of contact make them difficult for most motion planning techniques. Gradient-based techniques suffer from vanishing gradients when contact breaks, while sampling-based techniques struggle with the combinatorial complexity of multiple contact modes, i.e., sticking and sliding. This leads to time-consuming online replanning in the real world for contact-rich manipulation, limiting the real-time reactiveness of robots in coping with uncertainties and disturbances. Learning manipulation primitives that can quickly react to the surroundings is therefore appealing. Since the learned manipulation primitives will be sequenced by symbolic planners, which have no information about the geometric/motion level, the learned manipulation primitive should be robust to diverse instances with varied physical parameters, such as shape, mass, and friction coefficient. For example, once the push primitive is scheduled by the high-level symbolic planner, it should be able to …

Figure 2: Illustration of DA, DR and DC.


Energy-based Contact Planning under Uncertainty for Robot Air Hockey

arXiv.org Artificial Intelligence

Planning robot contact often requires reasoning over a horizon to anticipate outcomes, making such planning problems computationally expensive. In this letter, we propose a learning framework for efficient contact planning in real time, subject to uncertain contact dynamics. We implement our approach for the example task of robot air hockey. Based on a learned stochastic model of puck dynamics, we formulate contact planning for shooting actions as a stochastic optimal control problem with a chance constraint on hitting the goal. To achieve online re-planning capabilities, we propose to train an energy-based model to generate optimal shooting plans in real time. The performance of the trained policy is validated both in simulation and on a real-robot setup. Furthermore, our approach was tested in a competitive setting as part of the NeurIPS 2023 Robot Air Hockey Challenge.


Robust Pushing: Exploiting Quasi-static Belief Dynamics and Contact-informed Optimization

arXiv.org Artificial Intelligence

Non-prehensile manipulation such as pushing is typically subject to uncertain, non-smooth dynamics. However, modeling the uncertainty of the dynamics typically results in intractable belief dynamics, making data-efficient planning under uncertainty difficult. This article focuses on the problem of efficiently generating robust open-loop pushing plans. First, we investigate how the belief over object configurations propagates through quasi-static contact dynamics. We exploit the simplified dynamics to predict the variance of the object configuration without sampling from a perturbation distribution. In a sampling-based trajectory optimization algorithm, the gain of the variance is constrained in order to enforce robustness of the plan. Second, we propose an informed trajectory sampling mechanism for drawing robot trajectories that are likely to make contact with the object. This sampling mechanism is shown to significantly improve the chances of finding robust solutions, especially when making and breaking contact is required. We demonstrate that the proposed approach is able to synthesize bi-manual pushing trajectories, resulting in successful long-horizon pushing maneuvers without exteroceptive feedback such as vision or tactile sensing. We furthermore deploy the proposed approach in a model-predictive control scheme, demonstrating additional robustness against unmodeled perturbations.
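The key idea of predicting the variance of the object configuration without sampling can be conveyed by a generic linearized belief-propagation step. The sketch below is a standard EKF-style prediction, not the paper's quasi-static contact derivation; the dynamics map, Jacobian, and noise matrix are illustrative assumptions.

```python
import numpy as np

def propagate_belief(mu, Sigma, f, A, Q):
    """One-step belief propagation through locally linearized dynamics:
    the mean passes through the nonlinear map f, the covariance through
    the Jacobian A, plus process noise Q. No sampling is required."""
    mu_next = f(mu)
    Sigma_next = A @ Sigma @ A.T + Q
    return mu_next, Sigma_next

# Toy example: a contraction-like push step x' = 0.8 x shrinks uncertainty,
# while process noise injects a floor of 0.01 per dimension.
A = 0.8 * np.eye(2)
Q = 0.01 * np.eye(2)
mu, Sigma = propagate_belief(np.zeros(2), np.eye(2), lambda x: A @ x, A, Q)
```

Constraining the growth of `Sigma` along a candidate trajectory, as the abstract describes for the variance gain, then becomes a cheap deterministic computation inside the trajectory optimizer.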


Logic Learning from Demonstrations for Multi-step Manipulation Tasks in Dynamic Environments

arXiv.org Artificial Intelligence

Learning from Demonstration (LfD) stands as an efficient framework for imparting human-like skills to robots. Nevertheless, designing an LfD framework capable of seamlessly imitating, generalizing, and reacting to disturbances for long-horizon manipulation tasks in dynamic environments remains a challenge. To tackle this challenge, we present Logic Dynamic Movement Primitives (Logic-DMP), which combines Task and Motion Planning (TAMP) with an optimal control formulation of DMP, allowing us to incorporate motion-level via-point specifications and to handle task-level variations or disturbances in dynamic environments. We conduct a comparative analysis of our proposed approach against several baselines, evaluating its generalization ability and reactivity across three long-horizon manipulation tasks. Our experiments demonstrate the fast generalization and reactivity of Logic-DMP in handling task-level variants and disturbances in long-horizon manipulation tasks.


Logic-Skill Programming: An Optimization-based Approach to Sequential Skill Planning

arXiv.org Artificial Intelligence

Recent advances in robot skill learning have unlocked the potential to construct task-agnostic skill libraries, facilitating the seamless sequencing of multiple simple manipulation primitives (a.k.a. skills) to tackle significantly more complex tasks. Nevertheless, determining the optimal sequence for independently learned skills remains an open problem, particularly when the objective is given solely in terms of the final geometric configuration rather than a symbolic goal. To address this challenge, we propose Logic-Skill Programming (LSP), an optimization-based approach that sequences independently learned skills to solve long-horizon tasks. We formulate a first-order extension of a mathematical program to optimize the overall cumulative reward of all skills within a plan, abstracted by the sum of value functions. To solve such programs, we leverage tensor train factorization to construct the value function space, and rely on alternations between symbolic search and skill value optimization to find the appropriate skill skeleton and optimal subgoal sequence. Experimental results indicate that the obtained value functions provide a superior approximation of cumulative rewards compared to state-of-the-art reinforcement learning methods. Furthermore, we validate LSP in three manipulation domains, encompassing both prehensile and non-prehensile primitives. The results demonstrate its capability to identify the optimal solution over the full logic and geometric path. The real-robot experiments showcase the effectiveness of our approach in coping with contact uncertainty and external disturbances in the real world.


Configuration Space Distance Fields for Manipulation Planning

arXiv.org Artificial Intelligence

The signed distance field (SDF) is a popular implicit shape representation in robotics, providing geometric information about objects and obstacles in a form that can easily be combined with control, optimization and learning techniques. Most often, SDFs are used to represent distances in task space, which corresponds to the familiar notion of distances that we perceive in our 3D world. However, SDFs can mathematically be used in other spaces, including robot configuration spaces. For a robot manipulator, this configuration space typically corresponds to the joint angles for each articulation of the robot. While it is customary in robot planning to express which portions of the configuration space are free from collision with obstacles, it is less common to think of this information as a distance field in the configuration space. In this paper, we demonstrate the potential of considering SDFs in the robot configuration space for optimization, which we call the configuration space distance field (CDF). Similarly to the use of SDF in task space, CDF provides an efficient joint angle distance query and direct access to the derivatives. Most approaches split the overall computation with one part in task space followed by one part in configuration space. Instead, CDF allows the implicit structure to be leveraged by control, optimization, and learning problems in a unified manner. In particular, we propose an efficient algorithm to compute and fuse CDFs that can be generalized to arbitrary scenes. A corresponding neural CDF representation using multilayer perceptrons is also presented to obtain a compact and continuous representation while improving computation efficiency. We demonstrate the effectiveness of CDF with planar obstacle avoidance examples and with a 7-axis Franka robot in inverse kinematics and manipulation planning tasks.
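The core concept, a distance field defined over joint angles rather than task-space positions, can be illustrated with a brute-force stand-in: the joint-space distance from a query configuration to the nearest known colliding configuration. This is only a conceptual sketch; the paper's efficient fusion algorithm and neural MLP representation are not reproduced here.

```python
import numpy as np

def cdf(q, colliding_configs):
    """Configuration-space distance of joint vector q: joint-space distance
    to the nearest known colliding configuration (brute-force stand-in for
    an efficiently computed configuration space distance field)."""
    return np.linalg.norm(colliding_configs - q, axis=1).min()

def cdf_grad(q, colliding_configs, eps=1e-5):
    """Finite-difference gradient of the distance field: the joint-space
    direction that retreats fastest from the nearest colliding configuration."""
    g = np.zeros_like(q)
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        g[i] = (cdf(q + dq, colliding_configs)
                - cdf(q - dq, colliding_configs)) / (2 * eps)
    return g

# Toy 2-DoF example: one known colliding configuration at (1, 0) rad
colliding = np.array([[1.0, 0.0]])
q = np.zeros(2)
d = cdf(q, colliding)        # distance in joint space
g = cdf_grad(q, colliding)   # points away from the colliding configuration
```

The direct availability of such derivatives in joint space is what lets the field plug into gradient-based inverse kinematics and planning, as the abstract describes.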


A Minimum-Jerk Approach to Handle Singularities in Virtual Fixtures

arXiv.org Artificial Intelligence

Implementing virtual fixtures in guiding tasks constrains the movement of the robot's end effector to specific curves within its workspace. However, guiding frameworks may encounter discontinuities when optimizing the reference target position to the nearest point relative to the current robot position. This article aims to give a geometric interpretation of such discontinuities, with specific reference to the commonly adopted Gauss-Newton algorithm. The effect of such discontinuities, defined as Euclidean Distance Singularities, is experimentally proved. We then propose a solution based on a Linear Quadratic Tracking problem with a minimum-jerk command, and compare and validate the performance of the proposed framework in two different human-robot interaction scenarios.
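The discontinuity can be reproduced with a minimal example: Gauss-Newton projection of a query point onto a unit circle (an illustrative curve chosen here, not one from the article). Two nearby queries straddling the circle's center, which plays the role of the Euclidean distance singularity, project to reference points far apart.

```python
import numpy as np

def nearest_point_gn(p, s0=0.0, iters=100):
    """Gauss-Newton search for the parameter s of the point on the unit
    circle x(s) = (cos s, sin s) closest to the query point p."""
    s = s0
    for _ in range(iters):
        x = np.array([np.cos(s), np.sin(s)])
        J = np.array([-np.sin(s), np.cos(s)])  # tangent dx/ds
        r = p - x                              # residual
        s = s + (J @ r) / (J @ J)              # Gauss-Newton step
    return np.array([np.cos(s), np.sin(s)])

# Two nearby queries on opposite sides of the center...
q1 = nearest_point_gn(np.array([0.1,  0.1]))
q2 = nearest_point_gn(np.array([0.1, -0.1]))
# ...project to points on opposite sides of the circle.
```

A small motion of the robot across such a singularity thus makes the optimized reference target jump, which is the discontinuity the article's minimum-jerk formulation is designed to smooth out.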


An Optimal Control Formulation of Tool Affordance Applied to Impact Tasks

arXiv.org Artificial Intelligence

Humans use tools to complete impact-aware tasks such as hammering a nail or playing tennis. The postures adopted to use these tools can significantly influence the performance of these tasks, where the force or velocity of the hand holding a tool plays a crucial role. The underlying motion planning challenge consists of grasping the tool with an optimal body posture in preparation for its use. Directional manipulability describes the dexterity of force and velocity in a joint configuration along a specific direction. In order to take directional manipulability and tool affordances into account, we apply an optimal control method combining the iterative linear quadratic regulator (iLQR) with the alternating direction method of multipliers (ADMM). Our approach considers the notion of tool affordances to solve motion planning problems, by introducing a cost based on directional velocity manipulability. The proposed approach is applied to impact tasks in simulation and on a real 7-axis robot, specifically in a nail-hammering task with the assistance of a pilot hole. Our comparison study demonstrates the importance of maximizing directional manipulability in impact-aware tasks.
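Directional velocity manipulability, the radius of the velocity manipulability ellipsoid along a chosen task-space direction, is a standard construction that can be computed directly from the Jacobian. The planar 2-link arm below is an illustrative assumption, not the 7-axis robot of the paper.

```python
import numpy as np

def directional_manipulability(J, d):
    """Velocity manipulability along direction d: the radius of the
    ellipsoid {x_dot = J q_dot : ||q_dot|| = 1} along the unit vector d."""
    d = d / np.linalg.norm(d)
    M = J @ J.T                                  # manipulability matrix
    return 1.0 / np.sqrt(d @ np.linalg.solve(M, d))

def jacobian_2link(q1, q2):
    """Jacobian of a planar 2-link arm with unit link lengths."""
    return np.array([
        [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
        [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
    ])

# At q = (0, pi/2) this arm transmits velocity better along x than along y
J = jacobian_2link(0.0, np.pi / 2)
m_x = directional_manipulability(J, np.array([1.0, 0.0]))
m_y = directional_manipulability(J, np.array([0.0, 1.0]))
```

A cost rewarding large manipulability along the intended impact direction, as the abstract describes for hammering, would favor postures where this quantity is high along the strike axis.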