Collaborating Authors

 Qi, Xiao


Enhancing Social Decision-Making of Autonomous Vehicles: A Mixed-Strategy Game Approach With Interaction Orientation Identification

arXiv.org Artificial Intelligence

The integration of Autonomous Vehicles (AVs) into existing human-driven traffic systems poses considerable challenges, especially in environments where human and machine interactions are frequent and complex, such as at unsignalized intersections. To address these challenges, we introduce a novel framework predicated on dynamic, socially aware game-theoretic decision-making to augment the social decision-making prowess of AVs in mixed driving environments. This comprehensive framework is delineated into three primary modules: Social Tendency Recognition, Mixed-Strategy Game Modeling, and Expert Mode Learning. We introduce 'Interaction Orientation' as a metric to evaluate the social decision-making tendencies of various agents, incorporating both environmental factors and trajectory data. The mixed-strategy game model developed as part of this framework considers the evolution of future traffic scenarios and includes a utility function that balances safety, operational efficiency, and the unpredictability of environmental conditions. To adapt to real-world driving complexities, our framework uses dynamic optimization techniques to assimilate and learn from expert human driving strategies. These strategies are compiled into a comprehensive library that serves as a reference for future decision-making. Our approach is validated on extensive driving datasets, and the results demonstrate marked improvements in decision timing and precision.
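To illustrate the mixed-strategy game idea, the following is a minimal sketch of a two-agent yield/proceed interaction at an unsignalized intersection. The utility weights, payoff values, and the 2x2 action set are illustrative assumptions for demonstration, not the paper's actual calibration or game structure.

```python
import numpy as np

# Hypothetical utility: weighted sum of safety and efficiency terms.
# Weights and payoff entries are illustrative assumptions only.
def utility(safety, efficiency, w_safety=0.7, w_eff=0.3):
    return w_safety * safety + w_eff * efficiency

# 2x2 payoff matrices for the AV (row player) and an HV (column player);
# actions are 0 = yield, 1 = proceed. Both proceeding creates a conflict.
U_av = np.array([
    [utility(1.0, 0.2), utility(1.0, 0.1)],   # AV yields
    [utility(1.0, 0.9), utility(0.0, 0.5)],   # AV proceeds
])
U_hv = np.array([
    [utility(1.0, 0.2), utility(1.0, 0.9)],
    [utility(1.0, 0.1), utility(0.0, 0.5)],
])

def mixed_equilibrium_2x2(A, B):
    """Mixed-strategy equilibrium of a 2x2 bimatrix game via indifference conditions."""
    # Column player's probability of action 0 that makes the row player indifferent.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # Row player's probability of action 0 that makes the column player indifferent.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
    return np.clip(p, 0, 1), np.clip(q, 0, 1)

p_av_yield, p_hv_yield = mixed_equilibrium_2x2(U_av, U_hv)
print(f"P(AV yields) = {p_av_yield:.2f}, P(HV yields) = {p_hv_yield:.2f}")
```

In practice, such equilibrium probabilities would be conditioned on the recognized interaction orientation and the predicted evolution of the scene, rather than on fixed payoff constants as in this sketch.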


Teaching Autonomous Vehicles to Express Interaction Intent during Unprotected Left Turns: A Human-Driving-Prior-Based Trajectory Planning Approach

arXiv.org Artificial Intelligence

Incorporating Autonomous Vehicles (AVs) into existing transportation systems necessitates examining their coexistence with Human-driven Vehicles (HVs) in mixed traffic environments. Central to this coexistence is the AVs' ability to emulate human-like interaction intentions within traffic scenarios. We introduce a novel framework for planning unprotected left-turn trajectories for AVs, designed to mirror human driving behaviors and effectively communicate social intentions. The framework consists of three phases: trajectory generation, evaluation, and selection. In the trajectory generation phase, we use real human-driving trajectory data to establish constraints on a predicted trajectory space, creating candidate motion trajectories that reflect intent. The evaluation phase incorporates maximum entropy inverse reinforcement learning (ME-IRL) to gauge human trajectory preferences, considering aspects such as traffic efficiency, driving comfort, and interactive safety. During the selection phase, a Boltzmann distribution-based approach assigns rewards and probabilities to the candidate trajectories, promoting human-like decision-making. We validate the framework on an authentic trajectory dataset and conduct a comparative analysis against several baseline methods. Results from simulator tests and human-in-the-loop driving experiments confirm that our framework outperforms the baselines in human-like driving, intent expression, and computational efficiency. For additional information on this research, please visit https://shorturl.at/jqu35.
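The selection phase can be sketched as a softmax over linear trajectory rewards. In the sketch below, the candidate feature values, the reward weights, and the temperature are hypothetical placeholders; in the paper the reward weights are learned with ME-IRL rather than set by hand.

```python
import numpy as np

# Illustrative feature values per candidate left-turn trajectory:
# columns are (traffic efficiency, driving comfort, interactive safety).
candidate_features = np.array([
    [0.9, 0.4, 0.6],   # aggressive, fast turn
    [0.6, 0.8, 0.8],   # moderate turn
    [0.3, 0.9, 0.9],   # cautious, yielding turn
])
theta = np.array([0.5, 0.2, 0.3])  # hypothetical reward weights (ME-IRL output in the paper)

def boltzmann_selection(features, theta, beta=5.0):
    """Assign selection probabilities to candidate trajectories using a
    Boltzmann (softmax) distribution over their linear rewards."""
    rewards = features @ theta
    logits = beta * rewards
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rewards, probs

rewards, probs = boltzmann_selection(candidate_features, theta)
for i, (r, p) in enumerate(zip(rewards, probs)):
    print(f"trajectory {i}: reward = {r:.2f}, P(select) = {p:.2f}")
```

The temperature `beta` controls how sharply the selection concentrates on the highest-reward candidate; a lower value yields more stochastic, human-like variability.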


Multi-Scale Feature Fusion using Parallel-Attention Block for COVID-19 Chest X-ray Diagnosis

arXiv.org Artificial Intelligence

Amid the global COVID-19 crisis, accurate diagnosis of COVID-19 from Chest X-ray (CXR) images is critical. To reduce intra- and inter-observer variability during radiological assessment, computer-aided diagnostic tools have been used to supplement medical decision-making and subsequent disease management. Computational methods with high accuracy and robustness are required for rapid triaging of patients and for aiding radiologists in interpreting the collected data. In this study, we propose a novel multi-feature fusion network that uses parallel attention blocks to fuse the original CXR images and local-phase feature-enhanced CXR images at multiple scales. We evaluate our model on COVID-19 datasets acquired from different organizations to assess its generalization ability. Our experiments demonstrate that our method achieves state-of-the-art performance and improved generalization capability, which is crucial for widespread deployment.
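As a rough illustration of parallel-attention fusion of the two input streams, the sketch below applies channel attention in parallel to the original-image and local-phase feature maps at one scale and then merges them. The block structure, channel counts, and reduction ratio are assumptions for demonstration and may differ from the architecture described in the paper.

```python
import torch
import torch.nn as nn

class ParallelAttentionFusion(nn.Module):
    """Minimal sketch: channel attention applied in parallel to the
    original-CXR branch and the local-phase-enhanced branch, followed
    by a 1x1 convolution that fuses the reweighted feature maps."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        def channel_attention():
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )
        self.attn_orig = channel_attention()    # attention for the original-image branch
        self.attn_phase = channel_attention()   # attention for the local-phase branch
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_orig, feat_phase):
        weighted_orig = feat_orig * self.attn_orig(feat_orig)
        weighted_phase = feat_phase * self.attn_phase(feat_phase)
        return self.fuse(torch.cat([weighted_orig, weighted_phase], dim=1))

# Example: fuse two 64-channel feature maps at a single scale.
block = ParallelAttentionFusion(channels=64)
f_orig, f_phase = torch.randn(1, 64, 56, 56), torch.randn(1, 64, 56, 56)
fused = block(f_orig, f_phase)
print(fused.shape)  # torch.Size([1, 64, 56, 56])
```

Repeating such a block at several encoder resolutions would give the multi-scale fusion the abstract refers to, with the fused maps feeding the downstream classifier.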