Multi-step manipulation task and motion planning guided by video demonstration

Zorina, Kateryna, Kovar, David, Fourmy, Mederic, Lamiraux, Florent, Mansard, Nicolas, Carpentier, Justin, Sivic, Josef, Petrik, Vladimir

arXiv.org Artificial Intelligence 

This work aims to leverage instructional video to solve complex multi-step task-and-motion planning tasks in robotics. Towards this goal, we propose an extension of the well-established Rapidly-exploring Random Tree (RRT) planner, which simultaneously grows multiple trees around grasp and release states extracted from the guiding video. Our key novelty lies in combining contact states and 3D object poses extracted from the guiding video with a traditional planning algorithm, which allows us to solve tasks with sequential dependencies, for example, when an object needs to be placed at a specific location to be grasped later. We also investigate the generalization capabilities of our approach beyond the scene depicted in the instructional video. To demonstrate the benefits of the proposed video-guided planning approach, we design a new benchmark with three challenging tasks: (i) 3D re-arrangement of multiple objects between a table and a shelf, (ii) multi-step transfer of an object through a tunnel, and (iii) transferring objects using a tray, similar to how a waiter transfers dishes. We demonstrate the effectiveness of our planning algorithm on several robots, including the Franka Emika Panda and the KUKA KMR iiwa. For a seamless transfer of the obtained plans to the real robot, we develop a trajectory refinement approach formulated as an optimal control problem (OCP).

Traditional robot motion planning algorithms seek a collision-free path from a given starting robot configuration to a given goal robot configuration [1]. Despite the large dimensionality of the configuration space, sampling-based motion planning algorithms [2], [3] have proven to be highly effective for solving complex motion planning problems for robots, ranging from six degrees of freedom (DoFs) for industrial manipulators to tens of DoFs for humanoids [4].
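To make the sampling-based planning baseline concrete, the following is a minimal sketch of the classic single-tree RRT procedure in a 2D unit-square configuration space. All names, the toy state space, and the parameter values are illustrative assumptions; the paper's multi-tree, video-guided extension is not reproduced here.

```python
import math
import random

def rrt(start, goal, is_free, step=0.1, goal_bias=0.05, max_iters=5000, seed=0):
    """Grow a tree from `start` by steering toward random samples until a
    collision-free node lands within `step` of `goal`. Returns the path
    from start to that node, or None if the iteration budget is exhausted."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}  # index of each node's parent in the tree
    for _ in range(max_iters):
        # Sample a target configuration; occasionally pick the goal itself
        # (goal biasing) so the tree is pulled toward it.
        target = goal if rng.random() < goal_bias else (rng.random(), rng.random())
        # Nearest-neighbor lookup (linear scan, fine for a small sketch).
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], target))
        near = nodes[i_near]
        d = math.dist(near, target)
        if d < 1e-9:
            continue
        # Steer one fixed-size step from the nearest node toward the target.
        new = (near[0] + step * (target[0] - near[0]) / d,
               near[1] + step * (target[1] - near[1]) / d)
        if not is_free(new):  # user-supplied collision check
            continue
        parent[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) <= step:
            # Backtrack parent pointers to recover the path, root first.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

path = rrt((0.1, 0.1), (0.9, 0.9),
           is_free=lambda q: 0.0 <= q[0] <= 1.0 and 0.0 <= q[1] <= 1.0)
```

The multi-tree variant described above differs in that several such trees are seeded at the grasp and release states extracted from the video and grown until they connect, rather than growing a single tree from the start configuration.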
Manipulation task-and-motion planning (TAMP) [5] adds further complexity to the problem by including movable objects in the state space. This requires the planner to discover the pick-and-place actions that connect the given start and goal robot configurations, bringing the manipulated objects from their start poses to their goal poses.

INRIA, Paris. This work is part of the AGIMUS project, funded by the European Union under GA no. 101070165. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission.