More Than Meets the Eye? Uncovering the Reasoning-Planning Disconnect in Training Vision-Language Driving Models
Song, Xurui, Huai, Shuo, Jiang, JingJing, Kong, Jiayi, Luo, Jun
–arXiv.org Artificial Intelligence
Vision-Language Model (VLM) driving agents promise explainable end-to-end autonomy by first producing natural-language reasoning and then predicting a planned trajectory. However, whether planning is causally driven by this reasoning remains a critical but unverified assumption. To investigate this, we build DriveMind, a large-scale driving Visual Question Answering (VQA) corpus with plan-aligned Chain-of-Thought (CoT), automatically generated from nuPlan. Our data generation process converts sensor data and annotations into structured inputs and, crucially, separates priors from to-be-reasoned signals, enabling clean information ablations. Using DriveMind, we train representative VLM agents with Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO) and evaluate them with nuPlan's metrics. Our results, unfortunately, indicate a consistent causal disconnect between reasoning and planning: removing ego/navigation priors causes large drops in planning scores, whereas removing CoT produces only minor changes. Attention analysis further shows that planning attends primarily to the priors rather than to the CoT. Based on this evidence, we propose the Reasoning-Planning Decoupling Hypothesis, positing that the training-yielded reasoning is an ancillary byproduct rather than a causal mediator. To enable efficient diagnosis, we also introduce a novel, training-free probe that measures an agent's reliance on priors by evaluating its planning robustness against minor input perturbations. In summary, we provide the community with a new dataset and a diagnostic tool to evaluate the causal fidelity of future models.

End-to-end autonomous driving learns planning directly from sensor data and has attracted sustained attention in both academia and industry commaai (2025); Chen et al. (2024); Hu et al. (2023); Jiang et al. (2023). Recent studies explore Vision-Language Model (VLM) driving agents that combine the reasoning capability of large language models (LLMs) with visual perception in order to approximate human driving Wen et al. (2024); Zhang et al. (2024a). Chain-of-Thought (CoT) Wei et al. (2022) has been shown to enhance reasoning in LLMs Feng et al. (2023), and it is increasingly adopted in VLM driving agents to make the sequence of perception, analysis, and decision explicit Sima et al. (2025); Tian et al. (2024); Wang et al. (2024). The intention is to strengthen planning while improving interpretability and controllability. In this paradigm, the model generates a response that first articulates a CoT for reasoning, followed by the final planned trajectory. Consequently, planning is taken for granted as being causally driven by the preceding CoT reasoning.
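
To make the information-ablation protocol concrete, here is a minimal Python sketch of how one might compare planning scores when different input components are withheld. The agent.plan(...) and score(...) interfaces and the field names ("ego_state", "navigation", "cot", "scene") are illustrative assumptions, not the authors' actual API; the inline comments mark the outcomes the abstract reports.

    def ablation_scores(agent, sample, score):
        """Compare planning scores when different parts of the input are withheld.

        agent.plan(inputs) -> trajectory and score(trajectory) -> float are
        hypothetical stand-ins for a VLM driving agent and nuPlan's metrics.
        """
        full = dict(sample)  # structured inputs: scene, priors, CoT
        no_priors = {k: v for k, v in sample.items()
                     if k not in ("ego_state", "navigation")}  # drop priors
        no_cot = {k: v for k, v in sample.items() if k != "cot"}  # drop reasoning

        return {
            "full": score(agent.plan(full)),
            "no_priors": score(agent.plan(no_priors)),  # paper: large score drop
            "no_cot": score(agent.plan(no_cot)),        # paper: only minor change
        }

A causally faithful agent should degrade sharply when the CoT is removed; the reported pattern (large drop without priors, minor change without CoT) is what motivates the decoupling hypothesis.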
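
The training-free probe can be sketched in the same spirit: perturb different input fields slightly and compare how far the resulting plans drift. Everything below (the perturb helper, the assumption that plans are numeric arrays, the drift ratio as a reliance measure) is an illustrative assumption consistent with, but not taken from, the abstract.

    import numpy as np

    def prior_reliance_probe(agent, sample, perturb, n_trials=20, eps=0.01):
        """Training-free probe: if planning leans on the priors, small noise
        in the prior fields should move the plan far more than equally small
        noise in the remaining fields.

        perturb(sample, keys, eps) -> noisy_sample is a hypothetical helper
        that adds small noise to the named fields; plans are assumed to be
        numpy arrays so plan drift can be measured with a norm.
        """
        base = agent.plan(sample)
        drifts = {"priors": [], "other": []}
        for _ in range(n_trials):
            for group, keys in (("priors", ("ego_state", "navigation")),
                                ("other", ("scene", "cot"))):
                noisy = perturb(sample, keys, eps)
                drifts[group].append(np.linalg.norm(agent.plan(noisy) - base))
        # Ratio well above 1 suggests planning is dominated by priors
        # rather than by the generated reasoning.
        return np.mean(drifts["priors"]) / (np.mean(drifts["other"]) + 1e-9)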
Oct-7-2025