Huang, Yanliang
Predictive Traffic Rule Compliance using Reinforcement Learning
Huang, Yanliang, Mair, Sebastian, Zeng, Zhuoqi, Althoff, Matthias
Autonomous vehicle path planning has reached a stage where safety and regulatory compliance are crucial. This paper presents an approach that integrates a motion planner with a deep reinforcement learning model to predict potential traffic rule violations. Our main innovation is replacing the standard actor network of an actor-critic method with a motion planning module, which ensures both stable and interpretable trajectory generation. In this setup, we use traffic rule robustness as the reward to train the reinforcement learning agent's critic, and the critic's output serves directly as the cost function of the motion planner, guiding trajectory selection. We incorporate key interstate rules from the German Road Traffic Regulation into a rule book and use a graph-based state representation to handle complex traffic information. Experiments on an open German highway dataset show that the model can predict and prevent traffic rule violations beyond the planning horizon, increasing safety and rule compliance in challenging traffic scenarios.

The field of autonomous driving has advanced substantially over the past five years. Although perception and prediction modules have become more reliable, planning systems still face challenges, particularly regarding safety assurance and operational robustness. Furthermore, traffic rule compliance remains a fundamental prerequisite for autonomous vehicles, both to protect road users and to satisfy legal certification standards. Recent research has effectively applied temporal logic to formalize traffic rules, enabling automated online monitoring systems [1]-[3] to continuously check compliance. These approaches use the concept of rule robustness, a quantitative metric indicating how strongly specific traffic rules are satisfied or violated.
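To make the critic-as-cost-function idea concrete, here is a minimal PyTorch sketch: a critic is trained to predict traffic-rule robustness (the RL reward), and a sampling-based planner then ranks candidate trajectories by that prediction. All names, network sizes, and the flat tensor encoding of states and trajectories are illustrative assumptions; the paper itself uses a graph-based state representation and a full motion planning module.

```python
# Hypothetical sketch of the critic-as-planner-cost idea; names and
# dimensions are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Predicts expected traffic-rule robustness for a (state, trajectory) pair."""
    def __init__(self, state_dim: int, traj_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + traj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar: predicted rule robustness
        )

    def forward(self, state, traj):
        return self.net(torch.cat([state, traj], dim=-1)).squeeze(-1)

def select_trajectory(critic, state, candidates):
    """Score sampled candidate trajectories with the critic and pick the one
    with the highest predicted rule robustness, i.e. the lowest cost."""
    with torch.no_grad():
        values = critic(state.expand(len(candidates), -1), candidates)
    return candidates[values.argmax()]

# Usage with random placeholders: one state, ten candidate trajectories.
# During training, the critic would regress the observed rule robustness.
critic = Critic(state_dim=32, traj_dim=20)
state = torch.randn(32)
candidates = torch.randn(10, 20)
best = select_trajectory(critic, state, candidates)
```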
Path Planning based on 2D Object Bounding-box
Huang, Yanliang, Zhou, Liguo, Liu, Chang, Knoll, Alois
The implementation of Autonomous Driving (AD) technologies within urban environments presents significant challenges that necessitate advanced perception systems and motion planning algorithms capable of managing situations of considerable complexity. Although end-to-end AD methods utilizing LiDAR sensors have achieved significant success in this scenario, we argue that their drawbacks may hinder practical application. Instead, we propose vision-centric AD as a promising alternative that offers a streamlined model without compromising performance. In this study, we present a path planning method that utilizes 2D bounding boxes of objects, developed through imitation learning in urban driving scenarios, by integrating high-definition (HD) map data with images captured by surrounding cameras. The perception stage involves bounding-box detection and tracking, while the planning stage employs both local embeddings via a Graph Neural Network (GNN) and global embeddings via a Transformer for temporal-spatial feature aggregation, ultimately producing the planned path. We evaluated our model on the nuPlan planning task and observed that it performs competitively in comparison to existing vision-centric methods.
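The following sketch illustrates the local-GNN-plus-global-Transformer aggregation described above, reduced to a single frame. The class names, feature sizes, distance-based adjacency, and waypoint head are all assumptions made for a self-contained example, not the paper's architecture.

```python
# Illustrative sketch: GNN for local interactions among detected boxes,
# Transformer for global aggregation, MLP head for path waypoints.
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One round of mean-aggregated message passing over detected objects."""
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: (N, dim) node features; adj: (N, N) 0/1 adjacency, e.g. by distance
        messages = adj @ self.msg(x) / adj.sum(-1, keepdim=True).clamp(min=1)
        return torch.relu(self.upd(torch.cat([x, messages], dim=-1)))

class BoxPlanner(nn.Module):
    def __init__(self, box_dim=4, dim=64, horizon=8):
        super().__init__()
        self.embed = nn.Linear(box_dim, dim)     # 2D bounding box -> node feature
        self.gnn = SimpleGNNLayer(dim)           # local embeddings
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)  # global
        self.head = nn.Linear(dim, horizon * 2)  # (x, y) waypoints

    def forward(self, boxes, adj):
        x = self.gnn(self.embed(boxes), adj)
        g = self.transformer(x.unsqueeze(0)).mean(dim=1)  # pool over objects
        return self.head(g).view(-1, 2)                   # planned waypoints

boxes = torch.randn(5, 4)  # five detected 2D boxes (x, y, w, h), placeholder
adj = (torch.cdist(boxes[:, :2], boxes[:, :2]) < 2.0).float()
path = BoxPlanner()(boxes, adj)  # (8, 2) waypoints
```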
YOLO-BEV: Generating Bird's-Eye View in the Same Way as 2D Object Detection
Liu, Chang, Zhou, Liguo, Huang, Yanliang, Knoll, Alois
Vehicle perception systems strive to achieve comprehensive and rapid visual interpretation of their surroundings for improved safety and navigation. We introduce YOLO-BEV, an efficient framework that harnesses a unique surround-camera setup to generate a 2D bird's-eye view of the vehicular environment. By strategically positioning eight cameras at 45-degree intervals, our system captures and integrates their imagery into a coherent 3x3 grid format with the center left blank, providing an enriched spatial representation that facilitates efficient processing. Our approach employs YOLO's detection mechanism for its inherent advantages of swift response and compact model structure, but replaces the conventional YOLO detection head with a custom-designed one that translates the panoramically captured data into a unified bird's-eye-view map of the ego car. Preliminary results validate the feasibility of YOLO-BEV for real-time vehicular perception tasks. With its streamlined architecture and potential for rapid deployment thanks to its small parameter count, YOLO-BEV is a promising tool that may reshape future perspectives in autonomous driving systems.
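The 3x3 grid composition with a blank center cell can be sketched in a few lines of PyTorch. Image size, channel layout, and the clockwise camera ordering below are assumptions for illustration; only the eight-camera, 3x3-grid-with-empty-center arrangement comes from the abstract.

```python
# Illustrative mosaic step before a YOLO-style detector; camera ordering
# and resolutions are assumptions, not the paper's exact configuration.
import torch

def compose_bev_grid(images: torch.Tensor) -> torch.Tensor:
    """Tile eight camera images (8, C, H, W) into one 3x3 mosaic (C, 3H, 3W).

    Cells are filled clockwise from the top-left; the center cell, where the
    ego vehicle sits, is left blank (zeros).
    """
    n, c, h, w = images.shape
    assert n == 8, "expected one image per 45-degree camera"
    grid = torch.zeros(c, 3 * h, 3 * w)
    # (row, col) positions around the blank center, clockwise from top-left
    cells = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for img, (r, col) in zip(images, cells):
        grid[:, r * h:(r + 1) * h, col * w:(col + 1) * w] = img
    return grid

mosaic = compose_bev_grid(torch.rand(8, 3, 224, 224))  # (3, 672, 672)
# The mosaic would then feed a YOLO-style backbone whose custom head
# regresses object positions directly in the bird's-eye-view frame.
```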