Joshi, Keyur
GAS: Generating Fast and Accurate Surrogate Models for Autonomous Vehicle Systems
Joshi, Keyur, Hsieh, Chiao, Mitra, Sayan, Misailovic, Sasa
Modern autonomous vehicle systems use complex perception and control components. These components can change rapidly during development, requiring constant re-testing. Unfortunately, high-fidelity simulations of these complex systems for evaluating vehicle safety are costly. The complexity also hinders the creation of less computationally intensive surrogate models. We present GAS, the first approach for creating surrogate models of complete (perception, control, and dynamics) autonomous vehicle systems containing complex perception and/or control components. GAS's two-stage approach first replaces complex perception components with a perception model. GAS then constructs a polynomial surrogate model of the complete vehicle system using Generalized Polynomial Chaos (GPC). We demonstrate the use of these surrogate models in two applications. First, we estimate the probability that the vehicle will enter an unsafe state over time. Second, we perform global sensitivity analysis of the vehicle system with respect to its state in a previous time step. GAS's approach also allows the perception model to be reused when vehicle control and dynamics characteristics are altered during development, saving significant time. We consider five scenarios concerning crop management vehicles that must not crash into adjacent crops, self-driving cars that must stay within their lane, and unmanned aircraft that must avoid collisions. Each of the systems in these scenarios contains a complex perception or control component. Using GAS, we generate surrogate models for these systems and evaluate the generated models in the applications described above. GAS's surrogate models provide an average speedup of $3.7\times$ for safe state probability estimation (minimum $2.1\times$) and $1.4\times$ for sensitivity analysis (minimum $1.3\times$), while maintaining high accuracy.
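To make the GPC step concrete, below is a minimal sketch (not the GAS implementation) of fitting a polynomial surrogate of a one-step closed-loop vehicle model and using it for safe-state probability estimation, via the chaospy library. The `vehicle_step` function, its dynamics, the state distribution, and the safety threshold are all illustrative assumptions standing in for the composed perception-model, control, and dynamics pipeline.

```python
import numpy
import chaospy

def vehicle_step(x):
    # Hypothetical closed-loop step: next cross-track error as a function of
    # the current one (placeholder for perception model + control + dynamics).
    return 0.9 * x + 0.05 * numpy.sin(x)

# Distribution over the vehicle state at the current time step.
state_dist = chaospy.Uniform(-1.0, 1.0)

# Orthogonal polynomial basis and Gaussian quadrature rule for that distribution.
expansion = chaospy.generate_expansion(order=5, dist=state_dist)
nodes, weights = chaospy.generate_quadrature(8, state_dist, rule="gaussian")

# Evaluate the true (expensive) system only at the quadrature nodes,
# then fit the polynomial surrogate to those evaluations.
evals = [vehicle_step(x) for x in nodes[0]]
surrogate = chaospy.fit_quadrature(expansion, nodes, weights, evals)

# The surrogate is a cheap polynomial: sample it to estimate the probability
# that the next state leaves an assumed safe set |x| <= 0.95.
samples = state_dist.sample(100_000, rule="sobol")
unsafe_prob = numpy.mean(numpy.abs(surrogate(samples)) > 0.95)
print(f"estimated unsafe-state probability: {unsafe_prob:.4f}")
```

Because first-order Sobol sensitivity indices can be computed directly from the same polynomial's coefficients (chaospy exposes `chaospy.Sens_m` for this), a single GPC surrogate can serve both of the applications described in the abstract.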
Verifying Controllers with Convolutional Neural Network-based Perception: A Case for Intelligible, Safe, and Precise Abstractions
Hsieh, Chiao, Joshi, Keyur, Misailovic, Sasa, Mitra, Sayan
Convolutional Neural Networks (CNNs) for object detection, lane detection, and segmentation now sit at the head of most autonomy pipelines, and yet their safety analysis remains an important challenge. Formal analysis of perception models is fundamentally difficult because their correctness is hard, if not impossible, to specify. We present a technique for inferring intelligible and safe abstractions for perception models from system-level safety requirements, data, and program analysis of the modules that are downstream from perception. The technique can help trade off safety, size, and precision in creating the abstractions and in the subsequent verification. We apply the method to two significant case studies based on high-fidelity simulations: (a) a vision-based lane-keeping controller for an autonomous vehicle and (b) a controller for an agricultural robot. We show how the generated abstractions can be composed with the downstream modules, and how the resulting abstract system can be verified using program analysis tools such as CBMC. Detailed evaluations of the impact of size, safety requirements, and environmental parameters (e.g., lighting, road surface, plant type) on the precision of the generated abstractions suggest that the approach can help guide the search for corner cases and safe operating envelopes.
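The core idea of such an abstraction, in minimal form: replace the CNN with a set-valued contract that maps the true state to the range of perception outputs the abstraction allows, compose that contract with the downstream controller, and check safety over every allowed output. The sketch below illustrates this in Python with an interval contract; all names, bounds, and dynamics are illustrative assumptions, and the paper discharges the corresponding check symbolically on C code with CBMC rather than by the endpoint enumeration shown here.

```python
def perception_abstraction(true_offset):
    # Contract inferred from data: the perceived lane offset lies within
    # +/- EPS of the true offset (a simple interval abstraction).
    EPS = 0.1
    return (true_offset - EPS, true_offset + EPS)

def controller(perceived_offset):
    # Hypothetical proportional lane-keeping controller.
    return -0.5 * perceived_offset

def step(true_offset, steer):
    # Hypothetical one-step lane dynamics.
    return true_offset + steer

def safe_for_all_perceptions(true_offset):
    lo, hi = perception_abstraction(true_offset)
    # The dynamics here are monotone in the steering command, so checking
    # the interval endpoints (the "corners" of the abstraction) suffices.
    return all(abs(step(true_offset, controller(p))) <= 1.0 for p in (lo, hi))

# Check a discretized range of initial offsets; in the paper this quantifier
# is discharged symbolically by CBMC rather than by sampling.
assert all(safe_for_all_perceptions(x / 100) for x in range(-100, 101))
print("abstract system safe on all sampled states")
```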