Hua, Min
Multi-Agent Reinforcement Learning for Connected and Automated Vehicles Control: Recent Advancements and Future Prospects
Hua, Min, Chen, Dong, Qi, Xinda, Jiang, Kun, Liu, Zemin Eitan, Zhou, Quan, Xu, Hongming
Connected and automated vehicles (CAVs) have emerged as a potential solution to the future challenges of developing safe, efficient, and eco-friendly transportation systems. However, CAV control presents significant challenges, given the complexity of the interconnectivity and coordination required among vehicles. To address this, multi-agent reinforcement learning (MARL), with its notable advancements in addressing complex problems in autonomous driving, robotics, and human-vehicle interaction, has emerged as a promising tool for enhancing the capabilities of CAVs. However, there is a notable absence of current reviews on state-of-the-art MARL algorithms in the context of CAVs. Therefore, this paper delivers a comprehensive review of the application of MARL techniques within the field of CAV control. The paper begins by introducing MARL, followed by a detailed explanation of its unique advantages in addressing complex mobility and traffic scenarios that involve multiple agents. It then presents a survey of MARL applications across the control dimensions of CAVs, covering critical and typical scenarios such as platooning control, lane-changing, and unsignalized intersections. In addition, the paper reviews the prominent simulation platforms used to create reliable training environments for MARL. Lastly, the paper examines the current challenges associated with deploying MARL within CAV control and outlines potential solutions that can effectively overcome these issues. Through this review, the study highlights the tremendous potential of MARL to enhance the performance and collaboration of CAV control in terms of safety, travel efficiency, and economy.
Recent Progress in Energy Management of Connected Hybrid Electric Vehicles Using Reinforcement Learning
Hua, Min, Shuai, Bin, Zhou, Quan, Wang, Jinhai, He, Yinglong, Xu, Hongming
This surge in energy demand not only places strain on existing resources but also raises critical concerns regarding environmental sustainability, largely due to the predominant use of fossil fuels [1]. In light of these complex challenges, the electrification of transportation has emerged as a compelling avenue for resolution [1-3]. Consequently, automotive manufacturers are progressively pivoting away from conventional fossil fuel-powered vehicles, embracing innovative energy alternatives such as battery electric vehicles (BEVs), hybrid electric vehicles (HEVs), and fuel cell electric vehicles (FCEVs) [4-6]. BEVs stand out for their ability to enhance fuel economy, reduce emissions, and extend mileage while navigating urban and environmental restrictions, but their main limitation is a shorter driving range than HEVs. HEVs offer the benefits of electrification without the range and charging constraints of BEVs, while FCEVs boast a longer range and faster refueling times than BEVs but are limited by the current scarcity of hydrogen refueling infrastructure. In response to these challenges, an effective energy management system (EMS) has emerged as a pivotal solution for optimizing energy usage and enhancing efficiency across these powertrain configurations.
Energy Management of Multi-mode Plug-in Hybrid Electric Vehicle using Multi-agent Deep Reinforcement Learning
Hua, Min, Zhang, Cetengfei, Zhang, Fanggang, Li, Zhi, Yu, Xiaoli, Xu, Hongming, Zhou, Quan
The recently emerging multi-mode plug-in hybrid electric vehicle (PHEV) technology is one of the pathways making contributions to decarbonization, and its energy management requires multiple-input multiple-output (MIMO) control. At present, existing methods usually decouple the MIMO control into multiple-input single-output (MISO) control and can therefore only achieve locally optimal performance. To optimize the multi-mode vehicle globally, this paper studies a MIMO control method for energy management of the multi-mode PHEV based on multi-agent deep reinforcement learning (MADRL). By introducing a relevance ratio, a hand-shaking strategy is proposed to enable two learning agents to work collaboratively under the MADRL framework using the deep deterministic policy gradient (DDPG) algorithm. Unified settings for the DDPG agents are obtained through a sensitivity analysis of the factors influencing learning performance. The optimal working mode for the hand-shaking strategy is attained through a parametric study on the relevance ratio. The advantage of the proposed energy management method is demonstrated on a software-in-the-loop testing platform. The results of the study indicate that the learning rate of the DDPG agents has the greatest influence on learning performance. Using the unified DDPG settings and a relevance ratio of 0.2, the proposed MADRL system can save up to 4% energy compared to the single-agent learning system and up to 23.54% energy compared to the conventional rule-based system.
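To make the hand-shaking idea concrete, the following minimal Python sketch shows one way two cooperating agents could exchange rewards through a relevance ratio; the agent roles and reward terms are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): reward hand-shaking between two
# cooperating DDPG agents, coupled through a relevance ratio.
RELEVANCE_RATIO = 0.2  # value reported as optimal in the abstract

def handshake_rewards(r_agent_a, r_agent_b, ratio=RELEVANCE_RATIO):
    """Each agent receives a blend of its own reward and its peer's reward."""
    r_a = (1.0 - ratio) * r_agent_a + ratio * r_agent_b
    r_b = (1.0 - ratio) * r_agent_b + ratio * r_agent_a
    return r_a, r_b

# Hypothetical per-step rewards: e.g. a fuel-consumption penalty for an
# engine-power agent and a switching penalty for a mode-selection agent.
print(handshake_rewards(-0.35, -0.10))
```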
Study on the Impacts of Hazardous Behaviors on Autonomous Vehicle Collision Rates Based on Humanoid Scenario Generation in CARLA
Mo, Longfei, Hua, Min, Sun, Hongyu, Xu, Hongming, Shuai, Bin, Zhou, Quan
Testing of functional safety and Safety Of The Intended Functionality (SOTIF) is important for autonomous vehicles (AVs). It is hard to test an AV's hazard response in the real world because doing so would expose passengers and other road users to hazards. This paper studied virtual testing of AVs on the CARLA platform and proposed a Humanoid Scenario Generation (HSG) scheme to investigate the impacts of hazardous behaviors on AV collision rates. The HSG scheme breaks through the current limitations on the rarity and reproducibility of real-world scenes. By accurately capturing five prominent human driver behaviors that directly contribute to vehicle collisions in the real world, the methodology significantly enhances the realism and diversity of the simulation, as evidenced by collision rate statistics across various traffic scenarios. The modular framework allows for customization, and its seamless integration within the CARLA platform ensures compatibility with existing tools. Ultimately, the comparison results demonstrate that all vehicles exhibiting hazardous behaviors followed the predefined random speed distribution, and the effectiveness of the HSG was validated by the distinct characteristics displayed by these behaviors.
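As an illustration of how hazardous behaviors can be injected into CARLA background traffic, the sketch below uses the standard TrafficManager API against a running CARLA server; the vehicle count, speed offsets, and behavior percentages are assumptions for illustration, not the HSG parameters from the paper.

```python
# Illustrative sketch only: spawning background vehicles with randomized
# hazardous styles via CARLA's TrafficManager. Values are assumptions.
import random
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
tm = client.get_trafficmanager()

blueprints = list(world.get_blueprint_library().filter("vehicle.*"))
spawn_points = world.get_map().get_spawn_points()

hazardous_vehicles = []
for sp in random.sample(spawn_points, k=min(20, len(spawn_points))):
    actor = world.try_spawn_actor(random.choice(blueprints), sp)
    if actor is None:
        continue
    actor.set_autopilot(True, tm.get_port())
    # Speeding drawn from a predefined random distribution (negative values
    # mean driving above the speed limit in the TrafficManager convention).
    tm.vehicle_percentage_speed_difference(actor, random.uniform(-40.0, -10.0))
    # Occasional red-light running and close following as hazardous behaviors.
    tm.ignore_lights_percentage(actor, random.uniform(0.0, 30.0))
    tm.distance_to_leading_vehicle(actor, random.uniform(0.5, 2.0))
    hazardous_vehicles.append(actor)
```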
Coordinated Control of Path Tracking and Yaw Stability for Distributed Drive Electric Vehicle Based on AMPC and DYC
Wu, Dongmei, Guan, Yuying, Xia, Xin, Du, Changqing, Yan, Fuwu, Li, Yang, Hua, Min, Liu, Wei
Maintaining both path-tracking accuracy and yaw stability of distributed drive electric vehicles (DDEVs) under various driving conditions presents a significant challenge in the field of vehicle control. To address this challenge, a coordinated control strategy that integrates adaptive model predictive control (AMPC) path-tracking control and direct yaw moment control (DYC) is proposed for DDEVs. The proposed strategy, inspired by a hierarchical framework, is coordinated by the upper layer of path-tracking control and the lower layer of direct yaw moment control. Based on the linear time-varying model predictive control (LTV MPC) algorithm, the effects of prediction horizon and weight coefficients on the path-tracking accuracy and yaw stability of the vehicle are compared and analyzed first. According to this analysis, an AMPC path-tracking controller with variable prediction horizon and weight coefficients is designed in the upper layer, considering the variation in vehicle speed. The lower layer involves DYC based on the linear quadratic regulator (LQR) technique. Specifically, the intervention rule of DYC is determined by the threshold of the yaw rate error and the phase diagram of the sideslip angle. Extensive simulation experiments are conducted to evaluate the proposed coordinated control strategy under different driving conditions. The results show that, under variable speed and low adhesion conditions, the vehicle's yaw stability and path-tracking accuracy are improved by 21.58% and 14.43%, respectively, compared to AMPC. Similarly, under high speed and low adhesion conditions, the vehicle's yaw stability and path-tracking accuracy are improved by 44.30% and 14.25%, respectively, compared to the coordination of LTV MPC and DYC. The results indicate that the proposed adaptive path-tracking controller is effective across different speeds.
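For readers unfamiliar with the DYC layer, the following sketch computes an LQR gain for a linear 2-DOF yaw-plane (bicycle) model with the corrective yaw moment as the control input; all vehicle parameters and weights are placeholder assumptions, not those used in the paper.

```python
# Illustrative sketch (assumed parameters, not the paper's controller):
# LQR gain for a 2-DOF yaw-plane model with states [sideslip angle, yaw rate]
# and the direct yaw moment as the control input.
import numpy as np
from scipy.linalg import solve_continuous_are

m, Iz, vx = 1500.0, 2500.0, 20.0   # mass [kg], yaw inertia [kg m^2], speed [m/s]
lf, lr = 1.2, 1.4                  # CG-to-axle distances [m]
cf, cr = 80000.0, 80000.0          # front/rear cornering stiffness [N/rad]

A = np.array([
    [-(cf + cr) / (m * vx), (lr * cr - lf * cf) / (m * vx**2) - 1.0],
    [(lr * cr - lf * cf) / Iz, -(lf**2 * cf + lr**2 * cr) / (Iz * vx)],
])
B = np.array([[0.0], [1.0 / Iz]])  # yaw moment enters the yaw-rate dynamics

Q = np.diag([1.0, 10.0])           # weights on sideslip and yaw-rate errors
R = np.array([[1e-6]])             # penalty on the corrective yaw moment

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P     # delta_Mz = -K @ [beta_err, yaw_rate_err]
print("LQR gain:", K)
```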
Multi-level decision framework collision avoidance algorithm in emergency scenarios
Chen, Guoying, Wang, Xinyu, Hua, Min, Liu, Wei
With the rapid development of autonomous driving, the attention of academia has increasingly focused on the development of anti-collision systems in emergency scenarios, which have a crucial impact on driving safety. While numerous anti-collision strategies have emerged in recent years, most of them only consider steering or braking. The dynamic and complex nature of the driving environment presents a challenge to developing robust collision avoidance algorithms in emergency scenarios. To address the complex, dynamic obstacle scene and improve lateral maneuverability, this paper establishes a multi-level decision-making obstacle avoidance framework that employs the safe distance model and integrates emergency steering and emergency braking to complete the obstacle avoidance process. This approach helps avoid the high-risk situation of vehicle instability that can result from the separation of steering and braking actions. In the emergency steering algorithm, we define the collision hazard moment and propose a multi-constraint dynamic collision avoidance planning method that considers the driving area. Simulation results demonstrate that the decision-making collision avoidance logic can be applied to dynamic collision avoidance scenarios in complex traffic situations, effectively completing the obstacle avoidance task in emergency scenarios and improving the safety of autonomous driving.
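A minimal sketch of how a safe-distance model could arbitrate between emergency braking and emergency steering is given below; the distance formulas, parameters, and thresholds are illustrative assumptions and do not reproduce the paper's decision logic.

```python
# Illustrative sketch only: safe-distance arbitration between emergency
# braking and emergency steering. Formulas and values are assumptions.
def braking_safe_distance(v_rel, a_max=8.0, t_react=0.3):
    """Distance needed to brake out the relative speed v_rel [m/s]."""
    return v_rel * t_react + v_rel**2 / (2.0 * a_max)

def steering_safe_distance(v_rel, lateral_offset=3.5, a_lat_max=6.0):
    """Distance covered while moving one lane width sideways."""
    t_steer = (2.0 * lateral_offset / a_lat_max) ** 0.5
    return v_rel * t_steer

def choose_maneuver(gap, v_rel):
    d_brake = braking_safe_distance(v_rel)
    d_steer = steering_safe_distance(v_rel)
    if gap > d_brake:
        return "brake"     # braking alone still avoids the collision
    if gap > d_steer:
        return "steer"     # steering needs less headway at this speed
    return "combined"      # blend braking and steering

print(choose_maneuver(gap=25.0, v_rel=20.0))
```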
A Systematic Survey of Control Techniques and Applications in Connected and Automated Vehicles
Liu, Wei, Hua, Min, Deng, Zhiyun, Meng, Zonglin, Huang, Yanjun, Hu, Chuan, Song, Shunhui, Gao, Letian, Liu, Changsheng, Shuai, Bin, Khajepour, Amir, Xiong, Lu, Xia, Xin
Vehicle control is one of the most critical challenges in autonomous vehicles (AVs) and connected and automated vehicles (CAVs), and it is paramount in vehicle safety, passenger comfort, transportation efficiency, and energy saving. This survey attempts to provide a comprehensive and thorough overview of the current state of vehicle control technology, focusing on the evolution from vehicle state estimation and trajectory tracking control in AVs at the microscopic level to collaborative control in CAVs at the macroscopic level. First, this review starts with the estimation of key vehicle states, specifically the vehicle sideslip angle, which is the most pivotal state for vehicle trajectory control, and discusses representative approaches. Then, we present symbolic vehicle trajectory tracking control approaches for AVs. On top of that, we further review the collaborative control frameworks for CAVs and corresponding applications. Finally, this survey concludes with a discussion of future research directions and challenges. This survey aims to provide a contextualized and in-depth look at the state of the art in vehicle control for AVs and CAVs, identifying critical areas of focus and pointing out potential areas for further exploration.
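For reference, the sideslip angle highlighted above is conventionally defined from the body-frame lateral and longitudinal velocities (a standard definition, not specific to this survey):

```latex
% Vehicle sideslip angle: the angle between the velocity vector and the
% vehicle's longitudinal axis.
\beta = \arctan\!\left(\frac{v_y}{v_x}\right)
```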
Optimal Energy Management of Plug-in Hybrid Vehicles Through Exploration-to-Exploitation Ratio Control in Ensemble Reinforcement Learning
Shuai, Bin, Hua, Min, Li, Yanfei, Shuai, Shijin, Xu, Hongming, Zhou, Quan
Developing intelligent energy management systems with high adaptability and superiority is necessary and significant for hybrid electric vehicles (HEVs). This paper proposes an ensemble learning scheme based on a learning automata module (LAM) to enhance vehicle energy efficiency. Two parallel base learners following different exploration-to-exploitation (E2E) ratio methods are used to generate an optimal solution, and the final action is jointly determined by the LAM using three ensemble methods. 'Reciprocal-function-based decay' (RBD) and 'step-based decay' (SBD) are proposed to generate E2E ratio trajectories, complementing the conventional exponential decay (EXD) function used in reinforcement learning. Furthermore, considering the different performances of the three decay functions, an optimal combination of RBD, SBD, and EXD is employed to determine the final action. Experiments are carried out in software-in-the-loop (SiL) and hardware-in-the-loop (HiL) settings to validate the energy-saving performance under four predefined cycles. The SiL test demonstrates that the ensemble learning system with the optimal combination can achieve 1.09% higher vehicle energy efficiency than a single Q-learning strategy with the EXD function. In the HiL test, the ensemble learning system with the optimal combination can save more than 1.04% energy under the predefined real-world driving condition compared to the single Q-learning scheme based on the EXD function.
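The three decay schedules named above can be sketched as simple functions of the training episode; the exact functional forms and constants below are assumptions for illustration only, not the paper's settings.

```python
# Illustrative sketch (assumed functional forms): three exploration-to-exploitation
# (E2E) ratio schedules of the kind compared in the paper.
import math

def exd(episode, eps0=1.0, k=0.01):
    """Exponential decay of the exploration ratio."""
    return eps0 * math.exp(-k * episode)

def rbd(episode, eps0=1.0, k=0.05):
    """Reciprocal-function-based decay."""
    return eps0 / (1.0 + k * episode)

def sbd(episode, eps0=1.0, step=100, factor=0.5, eps_min=0.05):
    """Step-based decay: scale the ratio down every `step` episodes."""
    return max(eps_min, eps0 * factor ** (episode // step))

for ep in (0, 100, 300):
    print(ep, round(exd(ep), 3), round(rbd(ep), 3), round(sbd(ep), 3))
```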
Energy Management of Multi-mode Hybrid Electric Vehicles based on Hand-shaking Multi-agent Learning
Hua, Min, Li, Zhi, Zhou, Quan
The future transportation system will be a multi-agent network in which connected AI agents work together to address the grand challenges of our age, e.g., mitigation of real-world driving energy consumption. In contrast to existing research on vehicle energy management, which decouples multiple-input multiple-output (MIMO) control into multiple-input single-output (MISO) control, this paper studies a multi-agent deep reinforcement learning (MADRL) framework to deal with multiple control outputs simultaneously. A new hand-shaking strategy is proposed for the DRL agents by introducing an independence ratio, and a parametric study is conducted to obtain the best setting for the MADRL framework. The study suggests that MADRL with an independence ratio of 0.2 performs best, saving more than 2.4% energy over the conventional DRL framework.
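One plausible formalization of the hand-shaking strategy, stated here as an assumption rather than the paper's definition, is a reward mix governed by the independence ratio:

```latex
% Assumed formalization (not taken from the paper): with independence ratio rho,
% each agent i optimizes a mix of its own reward r_i and its peer's reward r_j.
r_i^{\mathrm{mix}} = \rho \, r_i + (1 - \rho) \, r_j, \qquad i \neq j, \quad \rho = 0.2
```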