FP3: A 3D Foundation Policy for Robotic Manipulation
Rujia Yang, Geng Chen, Chuan Wen, Yang Gao
FP3 supports data-efficient fine-tuning for downstream tasks while demonstrating superior generalizability to unseen environments and novel objects.

Abstract

Following its success in natural language processing and computer vision, foundation models pre-trained on large-scale multi-task datasets have also shown great potential in robotics. However, most existing robot foundation models rely solely on 2D image observations, ignoring 3D geometric information, which is essential for robots to perceive and reason about the 3D world. In this paper, we introduce FP3, a 3D foundation policy for robotic manipulation. FP3 builds on a scalable diffusion transformer architecture and is pre-trained on 60k trajectories with point cloud observations. With this model design and diverse pre-training data, FP3 can be efficiently fine-tuned for downstream tasks while exhibiting strong generalization capabilities. Experiments on real robots demonstrate that with only 80 demonstrations, FP3 is able to learn a new task with over 90% success rates in novel environments with unseen objects, significantly surpassing existing robot foundation models. Visualizations and code are available at: FP3.

INTRODUCTION

Learning-based policies have shown great effectiveness in robotic manipulation [6, 80, 12, 75, 36, 3]. However, these learned policies often show limited or even zero generalization to unseen scenarios, new objects, and distractors [66]. Additionally, most current methods are trained on a single task or a few tasks [12, 75], requiring a relatively large number of expert demonstrations (usually about 200 episodes) to learn a new task.
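The abstract describes the core recipe: a diffusion transformer policy conditioned on point cloud observations, trained to denoise action chunks. The sketch below is a minimal, hypothetical illustration of that recipe, not the authors' implementation; the PointNet-style encoder, all dimensions, the two-token conditioning scheme, and the linear DDPM noise schedule are assumptions made for the example.

```python
# Minimal sketch of a point-cloud-conditioned diffusion transformer policy.
# All module names, dimensions, and the DDPM schedule are illustrative
# assumptions; they are not taken from the FP3 paper or codebase.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max-pooling."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, points):                       # points: (B, N, 3)
        return self.mlp(points).max(dim=1).values    # (B, dim)

class DiffusionTransformerPolicy(nn.Module):
    """Transformer that denoises a chunk of future actions, conditioned on
    a point cloud embedding and the diffusion timestep."""
    def __init__(self, action_dim=7, horizon=16, dim=256, steps=100):
        super().__init__()
        self.steps = steps
        self.obs_enc = PointCloudEncoder(dim)
        self.t_emb = nn.Embedding(steps, dim)
        self.a_proj = nn.Linear(action_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, action_dim)
        # Linear DDPM noise schedule (an assumption for this sketch).
        betas = torch.linspace(1e-4, 0.02, steps)
        self.register_buffer("alpha_bar", torch.cumprod(1 - betas, dim=0))

    def forward(self, noisy_actions, t, points):
        # Prepend two conditioning tokens (observation, timestep) to the
        # action-chunk tokens, then predict the noise per action token.
        cond = torch.stack([self.obs_enc(points), self.t_emb(t)], dim=1)
        tokens = torch.cat([cond, self.a_proj(noisy_actions)], dim=1)
        return self.head(self.backbone(tokens)[:, 2:])  # (B, horizon, A)

def diffusion_loss(policy, actions, points):
    """One DDPM training step: noise the expert action chunk, regress noise."""
    B = actions.shape[0]
    t = torch.randint(0, policy.steps, (B,), device=actions.device)
    eps = torch.randn_like(actions)
    ab = policy.alpha_bar[t].view(B, 1, 1)
    noisy = ab.sqrt() * actions + (1 - ab).sqrt() * eps
    return nn.functional.mse_loss(policy(noisy, t, points), eps)

# Usage with random stand-in data; fine-tuning on a new task's
# demonstrations (e.g., the 80 episodes cited above) would iterate this.
policy = DiffusionTransformerPolicy()
actions = torch.randn(4, 16, 7)     # (batch, horizon, action_dim)
points = torch.randn(4, 1024, 3)    # (batch, num_points, xyz)
loss = diffusion_loss(policy, actions, points)
loss.backward()
```

At inference time, the same network would be applied over a sequence of denoising steps to turn Gaussian noise into an executable action chunk; the point here is only to show how point cloud conditioning and action-chunk diffusion fit together in one model.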
arXiv.org Artificial Intelligence
Mar-11-2025