BEV-VLM: Trajectory Planning via Unified BEV Abstraction

Guancheng Chen, Sheng Yang, Tong Zhan, Jian Wang

arXiv.org Artificial Intelligence 

ABSTRACT

This paper introduces BEV-VLM, a novel framework for trajectory planning in autonomous driving that leverages Vision-Language Models (VLMs) with Bird's-Eye View (BEV) feature maps as visual inputs. Unlike conventional approaches that rely solely on raw visual data such as camera images, our method utilizes highly compressed and informative BEV representations, which are generated by fusing multi-modal sensor data (e.g., camera and LiDAR) and aligning them with HD maps. This unified BEV-HD Map format provides a geometrically consistent and rich scene description, enabling VLMs to perform accurate trajectory planning. Experimental results on the nuScenes dataset demonstrate a 44.8% improvement in planning accuracy and complete collision avoidance. Our work highlights that VLMs can effectively interpret processed visual representations such as BEV features, expanding their applicability beyond raw images in trajectory planning.

Index Terms -- Autonomous Driving, Vision-Language Model, Multi-Modal Learning

1. INTRODUCTION

In recent years, the pursuit of advanced autonomous driving (AD) has attracted extensive attention, with Vision-Language Models (VLMs) emerging as a promising pathway owing to the cognitive capabilities they acquire during pre-training, which enable effective application in real-world scenarios. While existing research has demonstrated the feasibility and reliability of using VLMs for path planning by feeding them camera images, these approaches suffer from two key limitations: they rely solely on camera data and thus lack integration with other modalities, such as LiDAR point clouds, and they fail to explore VLMs' potential for planning based on Bird's-Eye View (BEV) features. To address these gaps, this work avoids the direct use of raw visual signals (e.g., camera images) as VLM inputs.
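To make the unified BEV-HD Map input concrete, the sketch below fuses camera- and LiDAR-derived BEV feature maps with a rasterized HD map and flattens the result into patch tokens for a VLM. The grid shapes, the mean-fusion rule, and the patch tokenization are illustrative assumptions of ours, not the paper's actual architecture.

```python
import numpy as np

H, W, C = 50, 50, 64          # assumed BEV grid resolution and feature channels

def fuse_bev(cam_bev, lidar_bev, hd_map):
    """Average the two sensor BEV feature maps, then append HD-map channels."""
    assert cam_bev.shape == lidar_bev.shape == (H, W, C)
    fused = 0.5 * (cam_bev + lidar_bev)              # simple mean fusion (illustrative)
    return np.concatenate([fused, hd_map], axis=-1)  # (H, W, C + map_channels)

def bev_to_tokens(bev, patch=10):
    """Split the unified BEV grid into non-overlapping patches, one token each."""
    h, w, c = bev.shape
    t = bev.reshape(h // patch, patch, w // patch, patch, c)
    t = t.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    return t                                          # (num_patches, token_dim)

cam = np.random.rand(H, W, C)
lidar = np.random.rand(H, W, C)
hd_map = np.random.rand(H, W, 3)   # e.g. lane / crosswalk / boundary rasters (assumed)

unified = fuse_bev(cam, lidar, hd_map)
tokens = bev_to_tokens(unified)
print(unified.shape, tokens.shape)  # (50, 50, 67) (25, 6700)
```

In a full system these tokens would be projected into the VLM's embedding space; here they only illustrate how a geometrically aligned BEV-HD Map grid becomes a compact visual input.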