FlowBot3D: Learning 3D Articulation Flow to Manipulate Articulated Objects

Ben Eisner, Harry Zhang, David Held

arXiv.org Artificial Intelligence 

We propose a vision-based system that learns to predict the potential motions of the parts of a variety of articulated objects to guide downstream motion planning of the system to articulate the objects. To predict the object motions, we train a neural network to output a dense vector field representing the point-wise motion direction of the points in the point cloud under articulation. We then deploy an analytical motion planner based on this vector field to achieve a policy that yields maximum articulation. We train a single vision model entirely in simulation across all categories of objects, and we demonstrate the capability of our system to generalize to unseen object instances and novel categories in both simulation and the real world using the trained model for all categories, deploying our policy on a Sawyer robot with no fine-tuning. Results show that our system achieves state-of-the-art performance in both simulated and real-world experiments.

Figure 1: FlowBot3D in action. The system first observes the initial configuration of the object of interest, estimates the per-point articulation flow of the point cloud (3DAF), then executes the action based on the selected flow vector. Here, the red vectors represent the direction of flow of each point (object points appear in blue); the magnitude of the vector corresponds to the relative magnitude of the motion that point experiences as the object articulates.
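The dense vector field the network is trained to predict is the motion each point would undergo if the object were articulated slightly. As a minimal sketch (not the authors' code), the supervision target for a revolute joint can be generated in simulation by rotating the moving part by a small joint delta and differencing point positions; the names `joint_origin`, `joint_axis`, `part_mask`, and `delta` are illustrative assumptions.

```python
import numpy as np

def rotation_about_axis(axis: np.ndarray, angle: float) -> np.ndarray:
    """Rodrigues' formula: 3x3 rotation by `angle` radians about unit `axis`."""
    k = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def revolute_articulation_flow(points: np.ndarray,
                               part_mask: np.ndarray,
                               joint_origin: np.ndarray,
                               joint_axis: np.ndarray,
                               delta: float = 0.01) -> np.ndarray:
    """Dense (N, 3) flow target: displacement of each point under a small
    articulation `delta` of the joint; points off the moving part get zero."""
    R = rotation_about_axis(joint_axis, delta)
    moved = (points - joint_origin) @ R.T + joint_origin  # rotate about joint
    return np.where(part_mask[:, None], moved - points, 0.0)
```

A prismatic joint would be even simpler under the same assumptions: points on the moving part all receive `delta * joint_axis` as their flow, and all other points receive zero.

The analytical motion planner described above can then act greedily on the predicted field: grasp the point expected to move the most and servo the gripper along that point's normalized flow direction. The sketch below is an assumed rendering of that loop, not the paper's implementation; `flow_model`, `observe_point_cloud`, `move_gripper`, and `step_size` are hypothetical stand-ins.

```python
import numpy as np

def select_flow_action(points: np.ndarray,
                       flow: np.ndarray,
                       step_size: float = 0.05):
    """Return (contact_point, unit_direction, displacement) from a predicted
    (N, 3) flow field over an (N, 3) point cloud."""
    mags = np.linalg.norm(flow, axis=1)
    i = int(np.argmax(mags))                # point predicted to move the most
    direction = flow[i] / (mags[i] + 1e-8)  # unit motion direction
    return points[i], direction, step_size * direction

# Closed-loop use: re-observe, re-predict, and step until the flow vanishes.
# for _ in range(max_steps):
#     points = observe_point_cloud()        # hypothetical sensor call
#     flow = flow_model(points)             # trained 3DAF network (assumed)
#     contact, direction, dp = select_flow_action(points, flow)
#     move_gripper(contact + dp)            # hypothetical robot API
```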
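Re-predicting the flow at every step is what makes the policy analytical rather than learned: the same rule of following the maximal flow vector applies to any articulated object for which the network can predict a sensible field, which is consistent with the cross-category generalization claimed above.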
