
Collaborating Authors: Flannagan, Carol


Evaluation of adaptive sampling methods in scenario generation for virtual safety impact assessment of pre-crash safety systems

arXiv.org Artificial Intelligence

Virtual safety assessment plays a vital role in evaluating the safety impact of pre-crash safety systems such as advanced driver assistance systems (ADAS) and automated driving systems (ADS). However, as the number of parameters in simulation-based scenario generation increases, the number of crash scenarios to simulate grows exponentially, making complete enumeration computationally infeasible. Efficient sampling methods, such as importance sampling and active sampling, have been proposed to address this challenge. However, a comprehensive evaluation of how domain knowledge, stratification, and batch sampling affect their efficiency remains limited. This study evaluates the performance of importance sampling and active sampling in scenario generation, incorporating two domain-knowledge-driven features: adaptive sample space reduction (ASSR) and stratification. Additionally, we assess the effects of a third feature, batch sampling, on computational efficiency in terms of both CPU and wall-clock time. Based on our findings, we provide practical recommendations for applying ASSR, stratification, and batch sampling to optimize sampling performance. Our results demonstrate that ASSR substantially improves sampling efficiency for both importance sampling and active sampling. When integrated into active sampling, ASSR reduces the root mean squared error (RMSE) of the estimates by up to 90%. Stratification further improves sampling performance for both methods, regardless of ASSR implementation. When ASSR and/or stratification are applied, importance sampling performs on par with active sampling, whereas when neither feature is used, active sampling is more efficient. Larger batch sizes reduce wall-clock time but increase the number of simulations required to achieve the same estimation accuracy.
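A minimal sketch of the kind of estimator involved, assuming a toy discretized scenario space: stratification by speed band combined with importance weighting within each stratum. The scenario grid, the proposal distribution, and the simulate_crash stand-in are hypothetical illustrations, not the paper's implementation; active sampling and ASSR are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_crash(scenario):
    """Hypothetical stand-in for a crash simulation; returns 1.0 if the
    pre-crash system avoids the crash in this scenario, else 0.0."""
    speed, ttc = scenario
    return float(speed / 50.0 - ttc + rng.normal(0.0, 0.1) < 0.0)

# Toy discretized scenario space: (closing speed [km/h], initial TTC [s]).
speeds = np.arange(10, 110, 10)
ttcs = np.arange(0.5, 4.0, 0.5)

# Stratify by speed band.  Within each stratum, draw scenarios from an
# importance (proposal) distribution q that favors low-TTC (critical) cases,
# while the target distribution p over TTC values is uniform.
n_per_stratum = 50
stratum_estimates = []
for speed in speeds:
    members = np.array([(speed, t) for t in ttcs])
    p = np.full(len(members), 1.0 / len(members))      # target: uniform
    q = 1.0 / members[:, 1]                            # proposal: favor low TTC
    q /= q.sum()
    idx = rng.choice(len(members), size=n_per_stratum, p=q)
    outcomes = np.array([simulate_crash(members[i]) for i in idx])
    weights = p[idx] / q[idx]                          # importance weights
    stratum_estimates.append(np.mean(weights * outcomes))

# Strata are given equal weight here, so the overall estimate is a plain mean.
print("Estimated crash-avoidance rate:", np.mean(stratum_estimates))
```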


Model-based generation of representative rear-end crash scenarios across the full severity range using pre-crash data

arXiv.org Artificial Intelligence

Generating representative rear-end crash scenarios is crucial for safety assessments of Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS). However, existing methods for scenario generation face challenges such as limited and biased in-depth crash data and difficulties in validation. This study sought to overcome these challenges by combining naturalistic driving data and pre-crash kinematics data from rear-end crashes. The combined dataset was weighted to create a representative dataset of rear-end crash characteristics across the full severity range in the United States. Multivariate distribution models were built for the combined dataset, and a driver behavior model for the following vehicle was created by combining two existing models. Simulations were conducted to generate a set of synthetic rear-end crash scenarios, which were then weighted to create a representative synthetic rear-end crash dataset. Finally, the synthetic dataset was validated by comparing the distributions of parameters and the outcomes (Delta-v, the total change in vehicle velocity over the duration of the crash event) of the generated crashes with those in the original combined dataset. The synthetic crash dataset can be used for the safety assessments of ADAS and ADS and as a benchmark when evaluating the representativeness of scenarios generated through other methods.
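The abstract uses Delta-v as the validation outcome. As a point of reference, a common textbook simplification computes Delta-v from conservation of momentum under a perfectly plastic, collinear impact; the sketch below uses that assumption with made-up masses and speeds and may differ from the computation actually used in the paper.

```python
def delta_v_plastic(m_lead, m_following, v_lead, v_following):
    """Delta-v for both vehicles under a perfectly plastic (common final
    velocity) collinear impact, via conservation of momentum.  A textbook
    simplification, not necessarily the paper's computation."""
    v_common = (m_lead * v_lead + m_following * v_following) / (m_lead + m_following)
    return abs(v_common - v_lead), abs(v_common - v_following)

# Example: a 1600 kg following vehicle at 20 m/s strikes a 1400 kg lead
# vehicle travelling at 5 m/s.
dv_lead, dv_follow = delta_v_plastic(1400, 1600, 5.0, 20.0)
print(f"Delta-v lead: {dv_lead:.1f} m/s, following: {dv_follow:.1f} m/s")
```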


Evaluation of automated driving system safety metrics with logged vehicle trajectory data

arXiv.org Artificial Intelligence

Real-time safety metrics are important for the automated driving system (ADS) to assess the risk of driving situations and to assist decision-making. Although a number of real-time safety metrics have been proposed in the literature, a systematic performance evaluation of these safety metrics has been lacking. Because different safety metrics adopt different behavioral assumptions, it is difficult to compare them and evaluate their performance. To overcome this challenge, in this study we propose an evaluation framework utilizing logged vehicle trajectory data, in which trajectories for both the subject vehicle (SV) and background vehicles (BVs) are available and the prediction errors caused by behavioral assumptions can be eliminated. Specifically, we examine whether the SV is in a collision-unavoidable situation at each moment, given all near-future trajectories of the BVs. In this way, we level the ground for a fair comparison of different safety metrics, as a good safety metric should always raise an alarm before the collision-unavoidable moment. When trajectory data from a large number of trips are available, we can systematically evaluate and compare the statistical performance of different metrics. In the case study, three representative real-time safety metrics, including the time-to-collision (TTC), the PEGASUS Criticality Metric (PCM), and the Model Predictive Instantaneous Safety Metric (MPrISM), are evaluated using a large-scale simulated trajectory dataset. The proposed evaluation framework is important for researchers, practitioners, and regulators to characterize different metrics and to select appropriate metrics for different applications. Moreover, by conducting failure analysis on the moments when a safety metric failed, we can identify its potential weaknesses, which is valuable for its future refinement and improvement.
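Of the three metrics named, TTC has the simplest closed form: the gap to the vehicle ahead divided by the closing speed. The sketch below shows only that collinear, constant-speed version; PCM and MPrISM rely on more elaborate model-predictive formulations that are not reproduced here.

```python
import math

def time_to_collision(gap_m, v_sv, v_bv):
    """Time-to-collision for a car-following situation: bumper-to-bumper
    gap divided by closing speed.  Returns math.inf when the gap is not
    closing.  A simplified, collinear version of the TTC metric."""
    closing_speed = v_sv - v_bv          # SV approaching the vehicle ahead
    if closing_speed <= 0.0:
        return math.inf
    return gap_m / closing_speed

# Example: SV at 25 m/s, lead BV at 15 m/s, 30 m gap -> TTC = 3 s.
print(time_to_collision(30.0, 25.0, 15.0))
```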


Modeling Lead-vehicle Kinematics For Rear-end Crash Scenario Generation

arXiv.org Artificial Intelligence

The use of virtual safety assessment as the primary method for evaluating vehicle safety technologies has emphasized the importance of crash scenario generation. One of the most common crash types is the rear-end crash, which involves a lead vehicle and a following vehicle. Most studies have focused on the following vehicle, assuming that the lead vehicle maintains a constant acceleration/deceleration before the crash. However, there is no evidence for this premise in the literature. This study aims to address this knowledge gap by thoroughly analyzing and modeling the lead vehicle's behavior as a first step in generating rear-end crash scenarios. Accordingly, the study employed a piecewise linear model to parameterize the speed profiles of lead vehicles, utilizing two rear-end pre-crash/near-crash datasets. These datasets were merged and categorized into multiple sub-datasets; for each one, a multivariate distribution was constructed to represent the corresponding parameters. Subsequently, a synthetic dataset was generated using these distribution models and validated by comparison with the original combined dataset. The results highlight diverse lead-vehicle speed patterns, indicating that a more accurate model, such as the proposed piecewise linear model, is required instead of the conventional constant acceleration/deceleration model. Crashes generated with the proposed models accurately match crash data across the full severity range, surpassing existing lead-vehicle kinematics models in both severity range and accuracy. By providing more realistic speed profiles for the lead vehicle, the model developed in the study contributes to creating realistic rear-end crash scenarios and reconstructing real-life crashes.
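As an illustration of a piecewise linear parameterization, the sketch below evaluates a lead-vehicle speed profile defined by breakpoint times and speeds and recovers the implied per-segment accelerations. The breakpoints and speeds are made-up example values, not parameters fitted in the study.

```python
import numpy as np

def lead_speed_profile(t, breakpoints, speeds):
    """Piecewise linear lead-vehicle speed profile: `breakpoints` are the
    times (s) at which the acceleration changes, `speeds` the corresponding
    speeds (m/s).  Linear interpolation between breakpoints; the example
    values below are hypothetical, not fitted parameters from the paper."""
    return np.interp(t, breakpoints, speeds)

# Example: cruise at 20 m/s, brake to 5 m/s between t = 2 s and t = 4 s,
# then hold that speed until the (potential) impact.
t = np.linspace(0.0, 6.0, 61)
v = lead_speed_profile(t, [0.0, 2.0, 4.0, 6.0], [20.0, 20.0, 5.0, 5.0])

# Per-segment accelerations implied by the parameterization:
accels = np.diff([20.0, 20.0, 5.0, 5.0]) / np.diff([0.0, 2.0, 4.0, 6.0])
print("segment accelerations (m/s^2):", accels)   # [ 0.  -7.5  0. ]
```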


Driver Behavior Extraction from Videos in Naturalistic Driving Datasets with 3D ConvNets

arXiv.org Artificial Intelligence

Naturalistic driving data (NDD) is an important source of information to understand crash causation and human factors and to further develop crash avoidance countermeasures. Videos recorded while driving are often included in such datasets. While NDD often contains a large amount of video data, only a small portion can be annotated by human coders and used for research, leaving most of the video data underused. In this paper, we explored a computer vision method to automatically extract the information we need from videos. More specifically, we developed a 3D ConvNet algorithm to automatically extract cell-phone-related behaviors from videos. The experiments show that our method can extract chunks from videos, most of which (~79%) contain the automatically labeled cell-phone behaviors. In conjunction with human review of the extracted chunks, this approach can find cell-phone-related driver behaviors much more efficiently than simply viewing the videos.
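For readers unfamiliar with 3D ConvNets, the sketch below shows a minimal PyTorch clip classifier whose Conv3d layers convolve jointly over time and space. It is a toy illustration of the technique, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class TinyClip3DConvNet(nn.Module):
    """Minimal 3D ConvNet for clip-level binary classification (e.g.,
    cell-phone use vs. none).  A toy architecture for illustration only."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # (B,3,T,H,W) -> (B,16,T,H,W)
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),                 # halve T, H, W
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

# Example: a batch of 4 clips, each 16 frames of 112x112 RGB.
clips = torch.randn(4, 3, 16, 112, 112)
logits = TinyClip3DConvNet()(clips)
print(logits.shape)   # torch.Size([4, 2])
```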