AdaWM: Adaptive World Model based Planning for Autonomous Driving

Hang Wang, Xin Ye, Feng Tao, Chenbin Pan, Abhirup Mallik, Burhaneddin Yaman, Liu Ren, Junshan Zhang

arXiv.org Artificial Intelligence 

World model based reinforcement learning (RL) has emerged as a promising approach for autonomous driving, which learns a latent dynamics model and uses it to train a planning policy. To speed up the learning process, the pretrain-finetune paradigm is often used, where online RL is initialized by a pretrained model and a policy learned offline. However, naively performing such initialization in RL may result in dramatic performance degradation during the online interactions in the new task. To tackle this challenge, we first analyze the performance degradation and identify two primary root causes therein: the mismatch of the planning policy and the mismatch of the dynamics model, due to distribution shift. We further analyze the effects of these factors on performance degradation during finetuning, and our findings reveal that the choice of finetuning strategies plays a pivotal role in mitigating these effects. We then introduce AdaWM, an Adaptive World Model based planning method, featuring two key steps: (a) mismatch identification, which quantifies the mismatches and informs the finetuning strategy, and (b) alignment-driven finetuning, which selectively updates either the policy or the model as needed using efficient low-rank updates. Extensive experiments on the challenging CARLA driving tasks demonstrate that AdaWM significantly improves the finetuning process, resulting in more robust and efficient performance in autonomous driving systems.

Automated vehicles (AVs) are poised to revolutionize future mobility systems with enhanced safety and efficiency Yurtsever et al. (2020); Kalra & Paddock (2016); Maurer et al. (2016). Despite significant progress Teng et al. (2023); Hu et al. (2023); Jiang et al. (2023), developing AVs capable of navigating complex, diverse real-world scenarios remains challenging, particularly in unforeseen situations Campbell et al. (2010); Chen et al. (2024).
Autonomous vehicles must learn the complex dynamics of their environments, predict future scenarios accurately and swiftly, and take timely actions such as emergency braking. Thus motivated, in this work we devise an adaptive world model to advance embodied AI and improve the planning capability of autonomous driving systems. World model (WM) based reinforcement learning (RL) has emerged as a promising self-supervised approach for autonomous driving Chen et al. (2024); Wang et al. (2024); Guan et al. (2024); Li et al. (2024).
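As a rough illustration of the two-step scheme the abstract describes (mismatch identification followed by selective finetuning), the sketch below quantifies a dynamics-model mismatch as a one-step prediction error and a policy mismatch as a simple action-divergence proxy, then picks which component to update. All function names, the choice of mean squared error, and the threshold rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of AdaWM-style mismatch identification; the
# metrics and the selection rule are assumptions for illustration only.

def dynamics_mismatch(predicted_states, observed_states):
    """Mean squared one-step prediction error of the pretrained
    dynamics model on transitions collected in the new task."""
    n = len(observed_states)
    return sum((p - s) ** 2 for p, s in zip(predicted_states, observed_states)) / n

def policy_mismatch(pretrained_logps, target_logps):
    """Mean absolute log-probability gap, a simple stand-in for a
    divergence between the pretrained policy and new-task behavior."""
    n = len(target_logps)
    return sum(abs(a - b) for a, b in zip(pretrained_logps, target_logps)) / n

def select_finetune_target(dyn_err, pol_err, ratio=1.0):
    """Mismatch identification: selectively finetune whichever
    component exhibits the larger (scaled) mismatch."""
    return "dynamics_model" if dyn_err > ratio * pol_err else "policy"

# Example: the dynamics model drifts more than the policy on new-task data,
# so the alignment-driven finetuning step would update the model.
dyn = dynamics_mismatch([0.5, 2.5], [1.0, 2.0])   # 0.25
pol = policy_mismatch([-1.2, -0.7], [-1.1, -0.8])  # ~0.10
target = select_finetune_target(dyn, pol)          # "dynamics_model"
```

In practice the selected component would then receive efficient low-rank (LoRA-style) updates rather than full finetuning, per the abstract; the snippet only captures the decision step.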