LinguaSim: Interactive Multi-Vehicle Testing Scenario Generation via Natural Language Instruction Based on Large Language Models

Shi, Qingyuan, Meng, Qingwen, Cheng, Hao, Xu, Qing, Wang, Jianqiang

arXiv.org Artificial Intelligence

This layer contains the information of the background adversarial vehicles, whose behaviors are not directly guided by LinguaSim. These vehicles are automatically generated and placed around the ego vehicle and the guided adversarial vehicles by the LLM agent Chaos Maker, and they roam aimlessly on the given map. The background vehicles significantly increase the uncertainty and complexity of the generated scenarios.

B. Adversarial Behavior Generation

Compared to other state-of-the-art methods for generating realistic 3D scenarios from natural language descriptions, LinguaSim achieves a higher level of realism, flexibility, and interactivity thanks to the innovative structure of its Action Generator agent. The detailed workflow of this component is elaborated in this section; a simplified view of its operational logic is illustrated in Figure 1 (the basic workflow of the Action Generator module, alongside an example of the Behavior Topology Web it generates).

To establish a solid foundation for the Action Generator, a retrieval database was constructed to store the behaviors available to the guided adversarial vehicles. Each behavior in the database is referred to as an Atomic Behavior and serves as a fundamental building block in the subsequent process. As illustrated in Figure 1, each Atomic Behavior comprises three essential parts: 1) Agent Selection: an autonomous driving agent is selected to guide the adversarial vehicle to which the Atomic Behavior is applied. LinguaSim includes various predefined agents, such as the basic CARLA built-in agent that follows a given route, an adaptive cruise control (ACC) agent that follows the vehicle in front, and the PlanT agent, an imitation-learning-based planning algorithm developed by Renz, Chitta et al. [10]. These agents serve different purposes; for example, the Follow Vehicle behavior uses the ACC agent, while the PlanT agent is often used for less aggressive behaviors to mimic cautious drivers.
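The Atomic Behavior database described above can be sketched as follows. This is an illustrative Python mock-up, not the LinguaSim codebase: the class names, the keyword-based retrieval, and the agent identifiers (`"acc"`, `"plant"`, `"carla_basic"`) are hypothetical stand-ins, and a real system would retrieve by embedding similarity rather than word overlap.

```python
from dataclasses import dataclass

@dataclass
class AtomicBehavior:
    name: str          # e.g. "FollowVehicle"
    agent: str         # which driving agent executes it: "carla_basic", "acc", "plant"
    description: str   # natural-language text used for retrieval

class BehaviorDatabase:
    """Toy retrieval store; a real system would use embedding similarity."""

    def __init__(self):
        self._behaviors = {}

    def add(self, behavior):
        self._behaviors[behavior.name] = behavior

    def retrieve(self, query):
        # Score each behavior by how many query words its description contains.
        scored = [
            (sum(w in b.description.lower() for w in query.lower().split()), b)
            for b in self._behaviors.values()
        ]
        return max(scored, key=lambda t: t[0])[1]

db = BehaviorDatabase()
db.add(AtomicBehavior("FollowVehicle", "acc",
                      "follow the vehicle in front at a safe gap"))
db.add(AtomicBehavior("CutIn", "carla_basic",
                      "cut in sharply ahead of the ego vehicle"))
db.add(AtomicBehavior("CautiousCruise", "plant",
                      "drive cautiously and yield, mimicking a careful driver"))

picked = db.retrieve("make a car cut in front of the ego")
print(picked.name, "->", picked.agent)  # → CutIn -> carla_basic
```

The point of the structure is the pairing: retrieval selects the Atomic Behavior from the instruction text, and the behavior record then fixes which driving agent controls the adversarial vehicle.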


LD-Scene: LLM-Guided Diffusion for Controllable Generation of Adversarial Safety-Critical Driving Scenarios

Peng, Mingxing, Xie, Yuting, Guo, Xusen, Yao, Ruoyu, Yang, Hai, Ma, Jun

arXiv.org Artificial Intelligence

Ensuring the safety and robustness of autonomous driving systems necessitates comprehensive evaluation in safety-critical scenarios. However, such scenarios are rare and difficult to collect from real-world driving data, posing significant challenges to effectively assessing the performance of autonomous vehicles. Existing methods typically suffer from limited controllability and poor user-friendliness, as extensive expert knowledge is required. To address these challenges, we propose LD-Scene, a novel framework that integrates Large Language Models (LLMs) with Latent Diffusion Models (LDMs) for user-controllable adversarial scenario generation through natural language. Our approach comprises an LDM that captures realistic driving trajectory distributions and an LLM-based guidance module that translates user queries into adversarial loss functions, facilitating the generation of scenarios aligned with those queries. The guidance module integrates an LLM-based Chain-of-Thought (CoT) code generator and an LLM-based code debugger, enhancing the controllability and robustness of the generated guidance functions. Extensive experiments on the nuScenes dataset demonstrate that LD-Scene achieves state-of-the-art performance in generating realistic, diverse, and effective adversarial scenarios. Furthermore, our framework provides fine-grained control over adversarial behaviors, thereby facilitating more effective testing tailored to specific driving scenarios.
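The core mechanism, an adversarial loss function guiding diffusion sampling, can be illustrated with a toy sketch. Everything below is hypothetical and not the LD-Scene code: a hand-written distance loss stands in for the LLM-emitted guidance function, and a finite-difference gradient stands in for autodiff through a trained denoiser.

```python
import numpy as np

def adversarial_loss(adv_traj, ego_traj):
    """Average adversary-ego distance; lower = more adversarial. This is the
    kind of function the guidance module would emit from a query such as
    'make the red car drive close to the ego vehicle'."""
    return np.linalg.norm(adv_traj - ego_traj, axis=-1).mean()

def guided_step(adv_traj, ego_traj, step_size=1.0, eps=1e-4):
    """One guidance step: finite-difference gradient descent on the loss."""
    grad = np.zeros_like(adv_traj)
    base = adversarial_loss(adv_traj, ego_traj)
    for idx in np.ndindex(adv_traj.shape):
        bumped = adv_traj.copy()
        bumped[idx] += eps
        grad[idx] = (adversarial_loss(bumped, ego_traj) - base) / eps
    return adv_traj - step_size * grad

# Toy trajectories: the ego drives straight; the adversary runs parallel, 5 m away.
T = 10
ego = np.stack([np.arange(T, dtype=float), np.zeros(T)], axis=-1)
adv = np.stack([np.arange(T, dtype=float), np.full(T, 5.0)], axis=-1)

before = adversarial_loss(adv, ego)
for _ in range(20):
    adv = guided_step(adv, ego)
after = adversarial_loss(adv, ego)
print(f"mean distance: {before:.2f} -> {after:.2f}")
```

In the actual framework this nudge would be applied inside the latent diffusion denoising loop; here it is isolated to show that iterating the guidance gradient pulls the adversary toward the ego.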


Safety-Critical Traffic Simulation with Guided Latent Diffusion Model

Peng, Mingxing, Yao, Ruoyu, Guo, Xusen, Xie, Yuting, Chen, Xianda, Ma, Jun

arXiv.org Artificial Intelligence

Mingxing Peng (The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; mpeng060@connect.hkust-gz.edu.cn), Ruoyu Yao (The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; ryao092@connect.hkust-gz.edu.cn), Xusen Guo (The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; xguo796@connect.hkust-gz.edu.cn), Yuting Xie (School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; xieyt8@mail2.sysu.edu.cn), Xianda Chen (The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; xchen595@connect.hkust-gz.edu.cn)

Abstract — Safety-critical traffic simulation plays a crucial role in evaluating autonomous driving systems under rare and challenging scenarios. However, existing approaches often generate unrealistic scenarios due to insufficient consideration of physical plausibility and suffer from low generation efficiency. To address these limitations, we propose a guided latent diffusion model (LDM) capable of generating physically realistic and adversarial safety-critical traffic scenarios. Specifically, our model employs a graph-based variational autoencoder (VAE) to learn a compact latent space that captures complex multi-agent interactions while improving computational efficiency. Within this latent space, the diffusion model performs the denoising process to produce realistic trajectories. To enable controllable and adversarial scenario generation, we introduce novel guidance objectives that drive the diffusion process toward producing adversarial and behaviorally realistic driving behaviors.
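The efficiency argument for latent-space denoising can be made concrete with a toy numeric sketch. Nothing below comes from the paper: a random linear map stands in for the graph VAE, and an oracle noise predictor that knows the clean latent stands in for the trained denoiser, so the DDPM-style loop provably recovers the latent. The point is only that the diffusion loop runs over a 16-dimensional code instead of the full multi-agent trajectory tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, horizon, latent_dim = 8, 50, 16
traj_dim = n_agents * horizon * 2                  # (x, y) per agent per step

enc = rng.normal(size=(latent_dim, traj_dim)) / np.sqrt(traj_dim)
dec = np.linalg.pinv(enc)                          # decode latent -> trajectories

scene = rng.normal(size=traj_dim)                  # flattened multi-agent scene
z = enc @ scene                                    # compact latent code

# DDPM-style schedule: corrupt the latent, then denoise it step by step.
steps = 100
betas = np.linspace(1e-4, 2e-2, steps)
alphas_bar = np.cumprod(1.0 - betas)

z_t = (np.sqrt(alphas_bar[-1]) * z
       + np.sqrt(1 - alphas_bar[-1]) * rng.normal(size=latent_dim))
for t in reversed(range(steps)):
    # Oracle noise prediction (a trained denoiser would estimate this).
    eps_hat = (z_t - np.sqrt(alphas_bar[t]) * z) / np.sqrt(1 - alphas_bar[t])
    # Posterior-mean update of standard DDPM sampling (noise term omitted).
    z_t = (z_t - betas[t] / np.sqrt(1 - alphas_bar[t]) * eps_hat) / np.sqrt(1 - betas[t])

recon = dec @ z_t                                  # back to trajectory space
print(f"diffusion ran over {latent_dim} dims instead of {traj_dim}")
print(f"latent recovery error: {np.linalg.norm(z_t - z):.2e}")
```

The adversarial guidance objectives of the paper would enter as an extra gradient term inside this reverse loop, steering `z_t` while the denoiser keeps it on the learned trajectory manifold.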


A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving

Lou, Yang, Zhu, Yi, Song, Qun, Tan, Rui, Qiao, Chunming, Lee, Wei-Bin, Wang, Jianping

arXiv.org Artificial Intelligence

Trajectory prediction forecasts nearby agents' moves based on their historical trajectories. Accurate trajectory prediction is crucial for autonomous vehicles. Existing attacks compromise the prediction model of a victim AV by directly manipulating the historical trajectory of an attacker AV, which has limited real-world applicability. This paper, for the first time, explores an indirect attack approach that induces prediction errors via attacks against the perception module of a victim AV. Although it has been shown that physically realizable attacks against LiDAR-based perception are possible by placing a few objects at strategic locations, it is still an open challenge to find an object location from the vast search space in order to launch effective attacks against prediction under varying victim AV velocities. Through analysis, we observe that a prediction model is prone to an attack focusing on a single point in the scene. Consequently, we propose a novel two-stage attack framework to realize the single-point attack. The first stage of prediction-side attack efficiently identifies, guided by the distribution of detection results under object-based attacks against perception, the state perturbations for the prediction model that are effective and velocity-insensitive. In the second stage of location matching, we match the feasible object locations with the found state perturbations. Our evaluation using a public autonomous driving dataset shows that our attack causes a collision rate of up to 63% and various hazardous responses of the victim AV. The effectiveness of our attack is also demonstrated on a real testbed car. To the best of our knowledge, this study is the first security analysis spanning from LiDAR-based perception to prediction in autonomous driving, leading to a realistic attack on prediction. To counteract the proposed attack, potential defenses are discussed.
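The single-point intuition can be illustrated with a toy sketch: perturb one observed position of the attacker agent and score candidate perturbations by the prediction error they induce, averaged over several speeds as a stand-in for velocity-insensitivity. Everything here is hypothetical; the paper attacks a learned prediction model through LiDAR-feasible object placements, whereas this sketch uses a constant-velocity predictor and a small grid search.

```python
import numpy as np

def const_vel_predict(history, horizon=5):
    """Toy predictor: extrapolate the last observed velocity."""
    v = history[-1] - history[-2]
    return history[-1] + np.outer(np.arange(1, horizon + 1), v)

speeds = [5.0, 10.0, 15.0]                       # candidate victim speeds, m/s
candidates = [np.array([dx, dy]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

def attack_score(delta, horizon=5):
    """Mean prediction error induced by perturbing a single observed point,
    averaged over speeds so the chosen perturbation is velocity-insensitive."""
    errs = []
    for s in speeds:
        hist = np.outer(np.arange(8, dtype=float), [s * 0.1, 0.0])   # 0.1 s steps
        future = hist[-1] + np.outer(np.arange(1, horizon + 1), [s * 0.1, 0.0])
        hist_adv = hist.copy()
        hist_adv[-1] += delta                    # the single perturbed point
        pred = const_vel_predict(hist_adv, horizon)
        errs.append(np.linalg.norm(pred - future, axis=1).mean())
    return float(np.mean(errs))

best = max(candidates, key=attack_score)
print("best single-point perturbation:", best,
      "score:", round(attack_score(best), 3))
```

Because the perturbed point is the most recent observation, it corrupts the estimated velocity, so the induced error grows with the prediction horizon, which is why a single well-placed point suffices in this toy setting.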


Adversarial Evaluation of Autonomous Vehicles in Lane-Change Scenarios

Chen, Baiming, Li, Liang

arXiv.org Machine Learning

Autonomous vehicles must be comprehensively evaluated before they are deployed in cities and on highways. Current evaluation procedures lack the ability to target weaknesses and to evolve, so they can hardly generate adversarial environments for autonomous vehicles and therefore pose insufficient challenges. To overcome the limitations of static evaluation methods, this paper proposes a novel method that generates adversarial environments with deep reinforcement learning and clusters them with a nonparametric Bayesian method. Lane changing, a representative task in autonomous driving, is used to demonstrate the superiority of the proposed method. First, two lane-change models are developed, one rule-based and one learning-based, to be evaluated and compared. Next, adversarial environments are generated by training the surrounding interactive vehicles with deep reinforcement learning to obtain ensembles of locally optimal adversarial policies. Then, a nonparametric Bayesian approach is utilized to cluster the adversarial policies of the interactive vehicles. Finally, the adversarial environment patterns are illustrated, and the performance of the two lane-change models is evaluated and compared. The simulation results indicate that both models perform significantly worse in adversarial environments than in naturalistic ones, with plenty of weaknesses successfully extracted in only a few tests.
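Clustering policies without fixing the number of clusters in advance can be sketched with DP-means, the small-variance limit of a Dirichlet-process mixture, used here only as a stand-in for the paper's nonparametric Bayesian method. The two-dimensional "policy features" below are synthetic; real inputs would be descriptors of the trained adversarial policies.

```python
import numpy as np

def dp_means(X, lam, iters=20):
    """DP-means: a point farther than `lam` from every centroid opens a
    new cluster, so the cluster count is inferred from the data."""
    centers = [X[0].copy()]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assignment pass: far-away points spawn new clusters.
        for i, x in enumerate(X):
            d = [np.linalg.norm(x - c) for c in centers]
            if min(d) > lam:
                centers.append(x.copy())
                labels[i] = len(centers) - 1
            else:
                labels[i] = int(np.argmin(d))
        # Update pass: recompute centroids, dropping empty clusters.
        centers = [X[labels == k].mean(axis=0)
                   for k in range(len(centers)) if np.any(labels == k)]
    return labels, np.array(centers)

rng = np.random.default_rng(1)
# Three synthetic groups of adversarial-policy features, e.g. summaries of
# (cut-in aggressiveness, braking intensity) for trained interactive vehicles.
X = np.vstack([
    rng.normal([0.0, 0.0], 0.2, size=(20, 2)),
    rng.normal([5.0, 0.0], 0.2, size=(20, 2)),
    rng.normal([0.0, 5.0], 0.2, size=(20, 2)),
])
labels, centers = dp_means(X, lam=2.5)
print("clusters found:", len(centers))  # → clusters found: 3
```

The appeal in this evaluation setting is exactly what the abstract claims: the number of distinct adversarial environment patterns is not known beforehand, so the clustering must discover it.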