Xu, Hongming
SYNERGAI: Perception Alignment for Human-Robot Collaboration
Chen, Yixin, Zhang, Guoxi, Zhang, Yaowei, Xu, Hongming, Zhi, Peiyuan, Li, Qing, Huang, Siyuan
Recently, large language models (LLMs) have shown strong potential in facilitating human-robot interaction and collaboration. However, existing LLM-based systems often overlook the misalignment between human and robot perceptions, which hinders effective communication and real-world robot deployment. To address this issue, we introduce SYNERGAI, a unified system designed to achieve both perceptual alignment and human-robot collaboration. At its core, SYNERGAI employs a 3D Scene Graph (3DSG) as its explicit and innate representation. This enables the system to leverage an LLM to break down complex tasks and allocate appropriate tools in intermediate steps to extract relevant information from the 3DSG, modify its structure, or generate responses. Importantly, SYNERGAI incorporates an automatic mechanism that corrects perceptual misalignment with users by updating its 3DSG through online interaction. SYNERGAI achieves performance comparable to data-driven models on ScanQA in a zero-shot manner. Through comprehensive experiments across 10 real-world scenes, SYNERGAI demonstrates its effectiveness in establishing common ground with humans, realizing a success rate of 61.9% in alignment tasks. It also significantly improves the success rate on novel tasks, from 3.7% to 45.68%, by transferring the knowledge acquired during alignment.
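A minimal sketch of the kind of explicit scene representation the abstract describes: a 3DSG whose nodes an LLM-driven planner can query and rewrite when a user corrects a perception error. The class and tool names (SceneGraph, query_objects, update_attribute) are illustrative assumptions, not SYNERGAI's actual interface.

```python
# Minimal 3D Scene Graph sketch with query/update tools an LLM planner could
# call; names and layout are assumptions, not SYNERGAI's actual API.
from dataclasses import dataclass, field


@dataclass
class Node:
    """An object in the scene with a 3D position and open-ended attributes."""
    label: str
    position: tuple                       # (x, y, z) centroid in meters
    attributes: dict = field(default_factory=dict)


class SceneGraph:
    def __init__(self):
        self.nodes: dict[int, Node] = {}
        self.edges: list[tuple[int, str, int]] = []  # (src_id, relation, dst_id)

    def query_objects(self, label: str) -> list[int]:
        """Tool: return ids of all nodes matching a label."""
        return [i for i, n in self.nodes.items() if n.label == label]

    def update_attribute(self, node_id: int, key: str, value) -> None:
        """Tool: correct a perceptual error reported by the user."""
        self.nodes[node_id].attributes[key] = value


# Example: the user says "that's a mug, not a cup" -> the system rewrites the node.
sg = SceneGraph()
sg.nodes[0] = Node("cup", (1.2, 0.3, 0.8))
for nid in sg.query_objects("cup"):
    sg.nodes[nid].label = "mug"           # perceptual misalignment corrected online
```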
End-to-End Neuro-Symbolic Reinforcement Learning with Textual Explanations
Luo, Lirui, Zhang, Guoxi, Xu, Hongming, Yang, Yaodong, Fang, Cong, Li, Qing
Neuro-symbolic reinforcement learning (NS-RL) has emerged as a promising paradigm for explainable decision-making, characterized by the interpretability of symbolic policies. For tasks with visual observations, NS-RL entails structured state representations, but previous methods cannot refine the structured states with rewards due to a lack of efficiency. Accessibility also remains an issue, as extensive domain knowledge is required to interpret symbolic policies. In this paper, we present a neuro-symbolic framework for jointly learning structured states and symbolic policies, whose key idea is to distill a vision foundation model into an efficient perception module and refine it during policy learning. Moreover, we design a pipeline that prompts GPT-4 to generate textual explanations for the learned policies and decisions, significantly reducing users' cognitive load in understanding the symbolic policies. We verify the efficacy of our approach on nine Atari tasks and present GPT-generated explanations for policies and decisions.
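As a hedged illustration of the explanation step, the sketch below renders a toy symbolic policy's feature weights into a prompt and asks GPT-4 for a plain-language summary. The prompt wording, feature names, and linear policy form are assumptions for illustration, not the paper's exact pipeline.

```python
# Hedged sketch: summarize a toy linear symbolic policy in plain language via GPT-4.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A toy linear symbolic policy: action score = sum(weight * feature).
policy = {
    "move_left": {"ball_x - paddle_x": -1.8, "ball_speed_y": 0.2},
    "move_right": {"ball_x - paddle_x": 1.8, "ball_speed_y": 0.2},
}

prompt = (
    "You are explaining a symbolic policy for an Atari game to a non-expert.\n"
    f"Each action is scored by a weighted sum of features: {policy}\n"
    "Explain in two sentences what strategy these weights encode."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```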
Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning
Li, Yaxin, Xu, Qi, Shen, Jiangrong, Xu, Hongming, Chen, Long, Pan, Gang
Deep, large-scale spiking neural networks (SNNs) now achieve high performance across diverse, complex datasets, but they contain a significant number of redundant structural units; compressing these models is therefore necessary to more effectively leverage SNNs' low power consumption and biological interpretability. Currently, most model compression techniques for SNNs rely on unstructured pruning of individual connections, which requires specific hardware support. Hence, we propose a structured pruning approach based on the activity levels of convolutional kernels, named the Spiking Channel Activity-based (SCA) network pruning framework. Inspired by synaptic plasticity mechanisms, our method dynamically adjusts the network's structure by pruning and regenerating convolutional kernels during training, enhancing the model's adaptation to the current task. While maintaining model performance, this approach refines the network architecture, ultimately reducing computational load and accelerating inference. The results indicate that structured dynamic sparse learning methods can better facilitate the application of deep SNNs in low-power, high-efficiency scenarios.
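The sketch below illustrates the general idea of activity-based structured pruning on a single spiking convolutional layer: rank output channels by their mean firing rate, zero out whole kernels for the least active, and reinitialize a few so the network can regrow. The thresholds, spike-tensor layout, and regeneration rule are illustrative assumptions, not the SCA framework's exact settings.

```python
# Activity-based structured pruning sketch for one spiking conv layer, assuming
# binary spike tensors of shape (T, B, C, H, W); constants are placeholders.
import torch
import torch.nn as nn


def channel_spike_rates(spikes: torch.Tensor) -> torch.Tensor:
    """Mean firing rate per output channel over time, batch, and space."""
    return spikes.float().mean(dim=(0, 1, 3, 4))   # (T, B, C, H, W) -> (C,)


def prune_and_regenerate(conv: nn.Conv2d, rates: torch.Tensor,
                         prune_frac: float = 0.2, regen_frac: float = 0.05):
    """Zero out the least-active kernels, then reinitialize a few of them."""
    c = rates.numel()
    order = torch.argsort(rates)                   # least active channels first
    prune_ids = order[: int(prune_frac * c)]
    regen_ids = prune_ids[: max(1, int(regen_frac * c))]
    with torch.no_grad():
        conv.weight[prune_ids] = 0.0               # structured prune: whole kernels
        new_kernels = torch.empty_like(conv.weight[regen_ids])
        nn.init.kaiming_normal_(new_kernels)       # regrow a small subset
        conv.weight[regen_ids] = new_kernels


conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
spikes = (torch.rand(4, 8, 32, 14, 14) > 0.8)      # fake binary spike train
prune_and_regenerate(conv, channel_spike_rates(spikes))
```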
Multi-Agent Reinforcement Learning for Connected and Automated Vehicles Control: Recent Advancements and Future Prospects
Hua, Min, Chen, Dong, Qi, Xinda, Jiang, Kun, Liu, Zemin Eitan, Zhou, Quan, Xu, Hongming
Connected and automated vehicles (CAVs) have emerged as a potential solution to the future challenges of developing safe, efficient, and eco-friendly transportation systems. However, CAV control presents significant challenges, given the complexity of the interconnectivity and coordination required among vehicles. To address this, multi-agent reinforcement learning (MARL), with its notable advances on complex problems in autonomous driving, robotics, and human-vehicle interaction, has emerged as a promising tool for enhancing the capabilities of CAVs. However, there is a notable absence of current reviews of state-of-the-art MARL algorithms in the context of CAVs. Therefore, this paper delivers a comprehensive review of the application of MARL techniques within the field of CAV control. The paper begins by introducing MARL, followed by a detailed explanation of its unique advantages in addressing complex mobility and traffic scenarios that involve multiple agents. It then presents a comprehensive survey of MARL applications across the control dimensions of CAVs, covering critical and typical scenarios such as platooning control, lane-changing, and unsignalized intersections. In addition, the paper reviews the prominent simulation platforms used to create reliable environments for MARL training. Lastly, it examines the current challenges associated with deploying MARL in CAV control and outlines potential solutions that can effectively overcome these issues. Through this review, the study highlights the tremendous potential of MARL to enhance the performance and collaboration of CAV control in terms of safety, travel efficiency, and economy.
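For readers new to MARL, a toy sketch of one surveyed setting, platoon gap-keeping with independent tabular Q-learners, is given below. The kinematics and reward are deliberately crude stand-ins, not any benchmark from the review.

```python
# Toy sketch: independent Q-learning for platoon gap-keeping. Each follower
# discretizes its gap to the vehicle ahead and learns brake/hold/accelerate.
import random
from collections import defaultdict

ACTIONS = [-1.0, 0.0, 1.0]            # gap change per step (m), a crude proxy
N_AGENTS, TARGET_GAP = 3, 10.0
q = [defaultdict(lambda: [0.0] * len(ACTIONS)) for _ in range(N_AGENTS)]

for episode in range(200):
    gaps = [random.uniform(5.0, 15.0) for _ in range(N_AGENTS)]
    for t in range(50):
        states = [round(g) for g in gaps]
        acts = []
        for i in range(N_AGENTS):     # epsilon-greedy, independent per agent
            if random.random() < 0.1:
                acts.append(random.randrange(len(ACTIONS)))
            else:
                acts.append(max(range(len(ACTIONS)),
                                key=lambda a: q[i][states[i]][a]))
        gaps = [max(0.0, g - ACTIONS[a]) for g, a in zip(gaps, acts)]
        for i in range(N_AGENTS):     # independent TD(0) update per agent
            reward = -abs(gaps[i] - TARGET_GAP)
            s2 = round(gaps[i])
            q[i][states[i]][acts[i]] += 0.1 * (
                reward + 0.9 * max(q[i][s2]) - q[i][states[i]][acts[i]])
```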
Recent Progress in Energy Management of Connected Hybrid Electric Vehicles Using Reinforcement Learning
Hua, Min, Shuai, Bin, Zhou, Quan, Wang, Jinhai, He, Yinglong, Xu, Hongming
The surge in energy demand not only places strain on existing resources but also raises critical concerns about environmental sustainability, largely owing to the predominant use of fossil fuels [1]. In light of these challenges, the electrification of transportation has emerged as a compelling avenue for resolution [1-3]. Consequently, automotive manufacturers are progressively pivoting away from conventional fossil-fuel-powered vehicles toward innovative alternatives such as battery electric vehicles (BEVs), hybrid electric vehicles (HEVs), and fuel cell electric vehicles (FCEVs) [4-6]. Such electric vehicles (EVs) stand out for their ability to enhance fuel economy, reduce emissions, and extend mileage range while navigating urban and environmental restrictions; the main limitation of BEVs, however, is a shorter range than HEVs. HEVs offer the benefits of electrification without the range and charging constraints of BEVs, while FCEVs boast a longer range and faster refueling times than BEVs but are held back by the current scarcity of hydrogen refueling infrastructure. In response to these challenges, effective energy management systems (EMSs) have emerged as a pivotal solution for optimizing energy usage and enhancing efficiency across various sectors.
Energy Management of Multi-mode Plug-in Hybrid Electric Vehicle using Multi-agent Deep Reinforcement Learning
Hua, Min, Zhang, Cetengfei, Zhang, Fanggang, Li, Zhi, Yu, Xiaoli, Xu, Hongming, Zhou, Quan
The recently emerging multi-mode plug-in hybrid electric vehicle (PHEV) technology is one of the pathways contributing to decarbonization, and its energy management requires multiple-input multiple-output (MIMO) control. At present, existing methods usually decouple MIMO control into multiple-input single-output (MISO) control and can therefore achieve only locally optimal performance. To optimize the multi-mode vehicle globally, this paper studies a MIMO control method for energy management of the multi-mode PHEV based on multi-agent deep reinforcement learning (MADRL). By introducing a relevance ratio, a hand-shaking strategy is proposed to enable two learning agents to work collaboratively under the MADRL framework using the deep deterministic policy gradient (DDPG) algorithm. Unified settings for the DDPG agents are obtained through a sensitivity analysis of the factors influencing learning performance, and the optimal working mode for the hand-shaking strategy is attained through a parametric study of the relevance ratio. The advantage of the proposed energy management method is demonstrated on a software-in-the-loop testing platform. The results indicate that the learning rate of the DDPG agents has the greatest influence on learning performance. Using the unified DDPG settings and a relevance ratio of 0.2, the proposed MADRL system can save up to 4% energy compared to the single-agent learning system and up to 23.54% energy compared to the conventional rule-based system.
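The abstract does not spell out how the relevance ratio couples the two agents; one plausible reading, sketched below, is that each DDPG agent trains on a blend of its own reward and its partner's, weighted by the ratio. The reward signals named here are illustrative stand-ins, not the paper's definitions.

```python
# Sketch of a "hand-shaking" reward coupling between two agents under an
# assumed interpretation: with relevance ratio r, each agent optimizes a blend
# of its own signal and its partner's. Reward semantics are illustrative.
def handshake_rewards(r_engine: float, r_mode: float, relevance: float = 0.2):
    """Return the coupled rewards the two DDPG agents would train on."""
    coupled_engine = (1 - relevance) * r_engine + relevance * r_mode
    coupled_mode = (1 - relevance) * r_mode + relevance * r_engine
    return coupled_engine, coupled_mode


# Example: the engine agent sees a small fuel cost (-0.5) while the mode agent
# incurs battery wear (-1.0); with relevance 0.2 each shares the other's signal.
print(handshake_rewards(-0.5, -1.0))   # (-0.6, -0.9)
```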
Study on the Impacts of Hazardous Behaviors on Autonomous Vehicle Collision Rates Based on Humanoid Scenario Generation in CARLA
Mo, Longfei, Hua, Min, Sun, Hongyu, Xu, Hongming, Shuai, Bin, Zhou, Quan
Testing of functional safety and Safety Of The Intended Functionality (SOTIF) is important for autonomous vehicles (AVs). It is hard to test an AV's hazard response in the real world because doing so would endanger passengers and other road users. This paper studies virtual testing of AVs on the CARLA platform and proposes a Humanoid Scenario Generation (HSG) scheme to investigate the impacts of hazardous behaviors on AV collision rates. The HSG scheme breaks through the current limitations on the rarity and reproducibility of real-world scenes. By accurately capturing five prominent human driver behaviors that directly contribute to vehicle collisions in the real world, the methodology significantly enhances the realism and diversity of the simulation, as evidenced by collision-rate statistics across various traffic scenarios. The modular framework allows for customization, and its seamless integration within the CARLA platform ensures compatibility with existing tools. The comparison results demonstrate that all vehicles exhibiting hazardous behaviors followed the predefined random speed distribution, and the effectiveness of the HSG was validated by the distinct characteristics displayed by these behaviors.
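A hedged sketch of the scenario-sampling idea: assign each simulated driver one of five hazardous behaviors and a target speed drawn from a predefined random distribution, seeded for reproducibility. The behavior names and distribution parameters are assumptions (the abstract does not enumerate them), and wiring the configs into CARLA actors is omitted.

```python
# Scenario sampling in the spirit of HSG: per-NPC hazardous behavior plus a
# target speed from a predefined random distribution. All values illustrative.
import random

BEHAVIORS = ["speeding", "tailgating", "abrupt_lane_change",
             "red_light_running", "distracted_drifting"]   # assumed names


def sample_npc(rng: random.Random) -> dict:
    """One NPC config: a behavior plus a speed from a truncated Gaussian."""
    speed = max(5.0, rng.gauss(50.0, 15.0))    # km/h, truncated at 5
    return {"behavior": rng.choice(BEHAVIORS),
            "target_speed_kmh": round(speed, 1)}


rng = random.Random(42)                        # fixed seed -> reproducible scenes
scenario = [sample_npc(rng) for _ in range(20)]
print(scenario[0])
```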
Optimal Energy Management of Plug-in Hybrid Vehicles Through Exploration-to-Exploitation Ratio Control in Ensemble Reinforcement Learning
Shuai, Bin, Hua, Min, Li, Yanfei, Shuai, Shijin, Xu, Hongming, Zhou, Quan
Developing intelligent energy management systems with high adaptability and superior performance is necessary and significant for Hybrid Electric Vehicles (HEVs). This paper proposes an ensemble learning scheme built on a learning automata module (LAM) to enhance vehicle energy efficiency. Two parallel base learners following two exploration-to-exploitation (E2E) ratio methods generate candidate solutions, and the final action is jointly determined by the LAM using three ensemble methods. 'Reciprocal function-based decay' (RBD) and 'Step-based decay' (SBD) are proposed to generate E2E ratio trajectories, complementing the conventional exponential decay (EXD) function of reinforcement learning. Furthermore, considering the different performances of the three decay functions, an optimal combination of RBD, SBD, and EXD is employed to determine the ultimate action. Experiments were carried out in software-in-the-loop (SiL) and hardware-in-the-loop (HiL) settings to validate the energy-saving potential under four predefined cycles. The SiL test demonstrates that the ensemble learning system with the optimal combination achieves 1.09% higher vehicle energy efficiency than a single Q-learning strategy with the EXD function. In the HiL test, the ensemble learning system with the optimal combination saves more than 1.04% energy in the predefined real-world driving condition compared with the single Q-learning scheme based on the EXD function.
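The three E2E-ratio schedules named in the abstract can be sketched as simple epsilon-decay functions; the exact functional forms and constants below are assumptions for illustration, not the paper's parameterization.

```python
# Assumed forms of the three E2E-ratio (epsilon) decay schedules: exponential
# (EXD), reciprocal function-based (RBD), and step-based (SBD).
import math


def exd(step: int, eps0: float = 1.0, k: float = 0.01) -> float:
    """Conventional exponential decay: eps = eps0 * exp(-k * step)."""
    return eps0 * math.exp(-k * step)


def rbd(step: int, eps0: float = 1.0, k: float = 0.05) -> float:
    """Reciprocal function-based decay: eps = eps0 / (1 + k * step)."""
    return eps0 / (1.0 + k * step)


def sbd(step: int, eps0: float = 1.0, drop: float = 0.5, every: int = 100) -> float:
    """Step-based decay: halve epsilon every fixed number of steps."""
    return eps0 * drop ** (step // every)


for t in (0, 100, 300):
    print(t, round(exd(t), 3), round(rbd(t), 3), round(sbd(t), 3))
```

Faster-decaying schedules shift the agent toward exploitation earlier, which is the trade-off the LAM arbitrates when combining the base learners.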