Robust Planning for Autonomous Driving via Mixed Adversarial Diffusion Predictions

Albert Zhao, Stefano Soatto

arXiv.org Artificial Intelligence 

We describe a robust planning method for autonomous driving that mixes normal and adversarial agent predictions output by a diffusion model trained for motion prediction. We first train a diffusion model to learn an unbiased distribution of normal agent behaviors. We then generate a distribution of adversarial predictions by biasing the diffusion model at test time toward predictions that are likely to collide with a candidate plan. We score plans using expected cost with respect to a mixture distribution of normal and adversarial predictions, yielding a planner that is robust to adversarial behaviors yet not overly conservative when agents behave normally. Unlike current approaches, we do not use risk measures that over-weight adversarial behaviors while placing little to no weight on low-cost normal behaviors, nor do we impose hard safety constraints that may not be appropriate for all driving scenarios. We show the effectiveness of our method on single-agent and multi-agent jaywalking scenarios as well as a red light violation scenario.

Predicting agent behaviors [1]-[4] is a key part of the autonomous driving pipeline. Recently, deep learning-based motion prediction methods [5]-[7] have produced accurate multimodal distributions of agent behavior.
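The plan-scoring idea in the abstract can be illustrated with a minimal sketch. Given samples from an unbiased (normal) prediction distribution and a biased (adversarial) one, each candidate plan is scored by its expected cost under a weighted mixture of the two sample sets. This is not the authors' implementation: the cost function, the mixture weight `alpha`, and all names below are hypothetical stand-ins, with trajectories reduced to short 2-D waypoint lists.

```python
# Hypothetical sketch of mixture-based plan scoring; all names and the
# toy proximity cost are illustrative assumptions, not the paper's method.

def plan_cost(plan, agent_traj):
    """Toy cost: penalize the closest approach between the ego plan and a
    predicted agent trajectory (equal-length lists of (x, y) waypoints)."""
    min_d2 = min((px - ax) ** 2 + (py - ay) ** 2
                 for (px, py), (ax, ay) in zip(plan, agent_traj))
    return 1.0 / (min_d2 + 1e-6)  # cost grows as the agent gets closer

def mixture_score(plan, normal_preds, adv_preds, alpha=0.2):
    """Expected cost under the mixture (1 - alpha)*normal + alpha*adversarial."""
    e_normal = sum(plan_cost(plan, p) for p in normal_preds) / len(normal_preds)
    e_adv = sum(plan_cost(plan, p) for p in adv_preds) / len(adv_preds)
    return (1 - alpha) * e_normal + alpha * e_adv

# Pick the candidate plan with the lowest mixture expected cost.
plans = {
    "keep_lane": [(0, 0), (1, 0), (2, 0)],
    "swerve":    [(0, 0), (1, 1), (2, 2)],
}
normal_preds = [[(5, 5), (5, 5), (5, 5)]]        # agent stays far away
adv_preds = [[(2, 0.5), (2, 0.2), (2, 0.0)]]     # agent cuts into the lane
best = min(plans, key=lambda k: mixture_score(plans[k], normal_preds, adv_preds))
print(best)  # the adversarial samples steer the choice away from "keep_lane"
```

With `alpha = 0` the planner ignores the adversarial samples entirely; with `alpha = 1` it becomes worst-case conservative. The mixture weight interpolates between the two, which is the trade-off the paper's scoring rule targets.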