TEDi Policy: Temporally Entangled Diffusion for Robotic Control

Høeg, Sigmund H., Tingelstad, Lars

arXiv.org Artificial Intelligence 

Recently, diffusion models have proven powerful for robotic imitation learning, mainly due to their ability to express complex and multimodal distributions [1, 2]. Chi et al. [1], with Diffusion Policy, show that diffusion models excel at imitation learning, surpassing previous state-of-the-art imitation learning methods by a large margin. A limitation of diffusion models is that multiple iterations are needed to obtain a clean prediction, and each iteration requires evaluating a neural network that is typically large. This limits the applicability of diffusion-based policies in environments with fast dynamics that demand high control frequencies, restricting them to more static tasks such as pick-and-place operations. Furthermore, the scarcity of computational resources onboard mobile robots further motivates minimizing the computation required to predict actions with diffusion-based policies. Several techniques have been proposed to reduce the required number of steps while preserving the performance of diffusion-based imitation learning policies [2, 3], mainly inspired by techniques developed for speeding up image-generation diffusion models [4, 5, 6]. Still, there are few examples of improvements specific to sequence-generating diffusion models.
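The inference bottleneck described above can be made concrete with a toy sketch of the reverse (denoising) loop used by diffusion-based policies: each denoising step is one forward pass through the policy network, so inference latency grows linearly with the number of steps. The `denoise_step` function below is a hypothetical stand-in for that network evaluation, not the actual Diffusion Policy model; the shapes and step count are illustrative only.

```python
import numpy as np

def denoise_step(action_seq, t, rng):
    # Toy stand-in for one (typically large) neural-network evaluation.
    # A real diffusion policy would predict and subtract noise here.
    return action_seq - 0.1 * action_seq + 0.01 * rng.standard_normal(action_seq.shape)

def sample_actions(horizon, action_dim, num_steps, rng):
    """DDPM-style reverse process: num_steps network evaluations per action prediction."""
    actions = rng.standard_normal((horizon, action_dim))  # start from pure Gaussian noise
    evals = 0
    for t in reversed(range(num_steps)):
        actions = denoise_step(actions, t, rng)
        evals += 1  # one network evaluation per denoising step
    return actions, evals

rng = np.random.default_rng(0)
actions, evals = sample_actions(horizon=16, action_dim=7, num_steps=100, rng=rng)
print(evals)  # 100: inference cost scales linearly with the number of denoising steps
```

With 100 denoising steps, every control cycle pays for 100 forward passes, which is what restricts such policies to low control frequencies and motivates the step-reduction techniques cited above.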
