

Conformal Prediction for Uncertainty-Aware Planning with Diffusion Dynamics Model

Neural Information Processing Systems

Robotic applications often involve working in environments that are uncertain, dynamic, and partially observable. Recently, diffusion models have been proposed for learning trajectory prediction models trained from expert demonstrations, which can be used for planning in robot tasks. Such models have demonstrated a strong ability to overcome challenges such as multi-modal action distributions, high-dimensional output spaces, and training instability. It is crucial to quantify the uncertainty of these dynamics models when using them for planning. In this paper, we quantify the uncertainty of diffusion dynamics models using Conformal Prediction (CP).
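The abstract names Conformal Prediction (CP) as the uncertainty-quantification tool but does not show the mechanics. A minimal sketch of split CP applied to a dynamics model's one-step prediction errors looks like the following; the calibration data, the scalar prediction setting, and the function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_truths, test_pred, alpha=0.1):
    """Split conformal prediction: wrap any point predictor (e.g. a
    diffusion dynamics model) in an interval with 1 - alpha coverage,
    assuming calibration and test points are exchangeable.

    cal_preds, cal_truths: held-out calibration predictions and ground
    truths (hypothetical data), shape (n_cal,).
    """
    # nonconformity score: absolute prediction error on calibration set
    scores = np.abs(np.asarray(cal_preds) - np.asarray(cal_truths))
    n = len(scores)
    # finite-sample corrected quantile level ceil((n+1)(1-alpha))/n
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    return test_pred - qhat, test_pred + qhat
```

The guarantee is distribution-free: regardless of how well the diffusion model fits, the interval covers the true next state with probability at least 1 - alpha under exchangeability.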




Diffusion Model Predictive Control

Zhou, Guangyao, Swaminathan, Sivaramakrishnan, Raju, Rajkumar Vasudeva, Guntupalli, J. Swaroop, Lehrach, Wolfgang, Ortiz, Joseph, Dedieu, Antoine, Lázaro-Gredilla, Miguel, Murphy, Kevin

arXiv.org Artificial Intelligence

We propose Diffusion Model Predictive Control (D-MPC), a novel MPC approach that learns a multi-step action proposal and a multi-step dynamics model, both using diffusion models, and combines them for use in online MPC. On the popular D4RL benchmark, we show performance that is significantly better than existing model-based offline planning methods using MPC and competitive with state-of-the-art (SOTA) model-based and model-free reinforcement learning methods. We additionally illustrate D-MPC's ability to optimize novel reward functions at run time and adapt to novel dynamics, and highlight its advantages compared to existing diffusion-based planning baselines.
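The abstract describes combining a learned multi-step action proposal with a learned multi-step dynamics model inside an online MPC loop. A bare-bones sketch of that sample-score-execute pattern follows; the placeholder functions stand in for the paper's two diffusion models, and the quadratic reward, horizon, and sample counts are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action_sequences(state, horizon, n_samples):
    """Stand-in for the multi-step action proposal model
    (a diffusion model in D-MPC)."""
    return rng.normal(size=(n_samples, horizon))

def rollout_dynamics(state, actions):
    """Stand-in for the multi-step dynamics model: predicts the state
    trajectory produced by an action sequence (here, simple integration)."""
    return state + np.cumsum(actions)

def reward(states):
    """Run-time reward, swappable without retraining: drive state to zero."""
    return -np.sum(states ** 2)

def mpc_step(state, horizon=8, n_samples=64):
    """One MPC iteration: sample action proposals, roll each out through
    the dynamics model, score by reward, execute the best first action."""
    proposals = sample_action_sequences(state, horizon, n_samples)
    scores = [reward(rollout_dynamics(state, seq)) for seq in proposals]
    best = proposals[int(np.argmax(scores))]
    return best[0]
```

Because the reward is evaluated only at planning time, swapping in a novel reward function at run time requires no retraining, which matches the run-time reward optimization the abstract highlights.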