Diffusion Trajectory-guided Policy for Long-horizon Robot Manipulation
Shichao Fan, Quantao Yang, Yajie Liu, Kun Wu, Zhengping Che, Qingjie Liu, Min Wan
arXiv.org Artificial Intelligence
Recently, Vision-Language-Action (VLA) models have advanced robot imitation learning, but high data-collection costs and limited demonstrations hinder generalization; current imitation-learning methods also struggle in out-of-distribution scenarios, especially for long-horizon tasks. A key challenge is mitigating compounding errors in imitation learning, which lead to cascading failures over extended trajectories. To address these challenges, we propose the Diffusion Trajectory-guided Policy (DTP) framework, which generates 2D trajectories with a diffusion model to guide policy learning for long-horizon tasks. By leveraging task-relevant trajectories, DTP provides trajectory-level guidance that reduces error accumulation. Our two-stage approach first trains a generative vision-language model to produce diffusion-based trajectories, then uses them to refine the imitation policy. Experiments on the CALVIN benchmark show that DTP outperforms state-of-the-art baselines by 25% in success rate, training from scratch without external pretraining. DTP also significantly improves real-world robot performance.
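The two-stage idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of stage one (a diffusion model denoising a 2D waypoint trajectory), not the authors' implementation: the real DTP conditions a vision-language model on task observations, whereas `predict_eps` here is a placeholder stub, and all hyperparameters (`T_STEPS`, `HORIZON`, the beta schedule) are assumed values.

```python
import numpy as np

# Hypothetical sketch of DTP stage 1: a DDPM-style diffusion model that
# denoises a sequence of 2D waypoints. Stage 2 (not shown) would feed the
# sampled trajectory to the low-level imitation policy as guidance.

T_STEPS = 50    # diffusion timesteps (assumed value)
HORIZON = 16    # number of 2D waypoints per trajectory (assumed value)

# Standard linear noise schedule.
betas = np.linspace(1e-4, 0.02, T_STEPS)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def q_sample(x0, t, rng):
    """Forward process: noise a clean trajectory x0 of shape (HORIZON, 2)
    directly to step t using the closed-form q(x_t | x_0)."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps

def predict_eps(x_t, t, cond):
    """Stand-in for the trained noise-prediction network. In the paper this
    would be a generative vision-language model conditioned on the task;
    here it is a zero-noise placeholder so the sketch runs end to end."""
    return np.zeros_like(x_t)

def sample_trajectory(cond, rng):
    """Reverse process: ancestral sampling from pure Gaussian noise down
    to a 2D trajectory, using the predicted noise at each step."""
    x = rng.standard_normal((HORIZON, 2))
    for t in reversed(range(T_STEPS)):
        eps_hat = predict_eps(x, t, cond)
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise at all but the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
traj = sample_trajectory(cond=None, rng=rng)
print(traj.shape)  # (16, 2): a 2D waypoint trajectory to guide the policy
```

In the full system, the stage-2 policy would take `traj` (projected into the image plane) as an extra conditioning input, which is what provides the trajectory-level guidance described above.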
Feb-14-2025
- Country:
- Europe (0.28)
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks (0.93)
- Natural Language > Large Language Model (0.68)
- Robots (1.00)