Extensive Exploration in Complex Traffic Scenarios using Hierarchical Reinforcement Learning
Zhihao Zhang, Ekim Yurtsever, Keith A. Redmill
arXiv.org Artificial Intelligence
Developing an automated driving system capable of navigating complex traffic environments remains a formidable challenge. Unlike rule-based or supervised learning-based methods, Deep Reinforcement Learning (DRL) based controllers eliminate the need for domain-specific knowledge and datasets, thus providing adaptability to various scenarios. Nonetheless, a common limitation of existing studies on DRL-based controllers is their focus on driving scenarios with simple traffic patterns, which hinders their ability to handle complex driving environments with delayed, long-term rewards and compromises the generalizability of their findings. In response to these limitations, our research introduces a pioneering hierarchical framework that efficiently decomposes intricate decision-making problems into manageable and interpretable subtasks. We adopt a two-step training process that trains the high-level and low-level controllers separately. The high-level controller exhibits enhanced exploration potential with long-term delayed rewards, and the low-level controller provides longitudinal and lateral control using short-term instantaneous rewards. Through simulation experiments, we demonstrate the superiority of our hierarchical controller in managing complex highway driving situations.
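The two-level decomposition described in the abstract can be sketched structurally as follows. This is a minimal illustrative skeleton, not the paper's implementation: the maneuver set, the decision interval, and the fixed control mappings are all assumptions, and the trained policies are replaced by stubs.

```python
import random

# Hypothetical discrete maneuver set for the high-level controller
# (assumption; the paper's exact action space is not given here).
MANEUVERS = ["keep_lane", "change_left", "change_right"]

class HighLevelController:
    """Selects a discrete maneuver at a coarse time scale.

    In the paper's two-step scheme this policy is trained with
    long-term, delayed rewards; here it is a stub that samples
    uniformly at random.
    """
    def select_maneuver(self, observation):
        return random.choice(MANEUVERS)

class LowLevelController:
    """Executes the chosen maneuver with continuous control.

    In the paper this policy is trained with short-term
    instantaneous rewards; here it maps each maneuver to a fixed
    (steering, acceleration) pair as a placeholder.
    """
    def act(self, observation, maneuver):
        steering = {"keep_lane": 0.0,
                    "change_left": -0.1,
                    "change_right": 0.1}[maneuver]
        acceleration = 0.5  # placeholder longitudinal command
        return steering, acceleration

def run_episode(steps=20, decision_interval=5):
    """One rollout: the high-level policy re-decides every
    `decision_interval` steps; the low-level policy acts every step."""
    high, low = HighLevelController(), LowLevelController()
    maneuver = None
    trajectory = []
    for t in range(steps):
        obs = {"t": t}  # stand-in for the real traffic observation
        if t % decision_interval == 0:
            maneuver = high.select_maneuver(obs)
        trajectory.append((maneuver, low.act(obs, maneuver)))
    return trajectory
```

The key design point the sketch captures is the separation of time scales: one high-level decision governs several low-level control steps, which is what lets the high-level policy be credited with long-horizon, delayed rewards while the low-level policy learns from dense, immediate feedback.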
Jan-24-2025
- Country:
- Asia
- Japan (0.04)
- Middle East > Republic of Türkiye
- Istanbul Province > Istanbul (0.04)
- North America > United States
- Ohio > Franklin County > Columbus (0.04)
- Genre:
- Research Report (0.82)
- Industry:
- Automobiles & Trucks (1.00)
- Transportation > Ground
- Road (1.00)
- Technology: