Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation
O'Kelly, Matthew, Sinha, Aman, Namkoong, Hongseok, Duchi, John, Tedrake, Russ
Recent breakthroughs in deep learning have accelerated the development of autonomous vehicles (AVs); many research prototypes now operate on real roads alongside human drivers. While advances in computer-vision techniques have made human-level performance possible on narrow perception tasks such as object recognition, several fatal accidents involving AVs underscore the importance of testing whether the perception and control pipeline--when considered as a whole system--can safely interact with humans. Unfortunately, testing AVs in real environments, the most straightforward validation framework for system-level input-output behavior, requires prohibitive amounts of time due to the rare nature of serious accidents [49]. Concretely, a recent study [29] argues that AVs need to drive "hundreds of millions of miles and, under some scenarios, hundreds of billions of miles to create enough data to clearly demonstrate their safety." Alternatively, formally verifying an AV algorithm's "correctness" [34, 2, 47, 37] is difficult since all driving policies are subject to crashes caused by other drivers [49]. It is unreasonable to ask that the policy be safe under all scenarios. Unfortunately, ruling out scenarios where the AV should not be blamed is a task subject to logical inconsistency, combinatorial growth in specification complexity, and subjective assignment of fault. Motivated by the challenges underlying real-world testing and formal verification, we consider a probabilistic paradigm--which we call a risk-based framework--where our goal is to evaluate the probability of an accident under a base distribution representing standard traffic behavior.
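The core difficulty the abstract describes--estimating the probability of a rare accident under a base distribution--can be illustrated with a small sketch. The example below is not the paper's method; it is a generic importance-sampling toy problem (estimating a tail probability of a standard normal, a stand-in for the "accident" event), with a shifted-mean proposal chosen purely for illustration:

```python
import numpy as np

# Toy rare-event problem: p = P(X < -4) under a standard normal "base
# distribution" is ~3.2e-5. Naive Monte Carlo rarely observes the event,
# mirroring why real-road testing of AVs needs so many miles.
rng = np.random.default_rng(0)
threshold = -4.0
n = 100_000

# Naive Monte Carlo: almost no samples land in the failure region.
x = rng.standard_normal(n)
p_naive = np.mean(x < threshold)

# Importance sampling: draw from a proposal N(threshold, 1) centered on the
# failure region, then reweight each sample by the likelihood ratio
# N(0,1)/N(threshold,1) so the estimator stays unbiased.
y = rng.normal(loc=threshold, scale=1.0, size=n)
log_w = -0.5 * y**2 + 0.5 * (y - threshold) ** 2  # log of the likelihood ratio
p_is = np.mean(np.exp(log_w) * (y < threshold))

print(p_naive, p_is)  # p_is ≈ 3.2e-5, with far lower variance than p_naive
```

With the same budget of 100,000 samples, the naive estimate is dominated by a handful of chance hits (often zero), while the importance-sampling estimate concentrates samples where failures occur and recovers the tiny probability reliably.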
Oct-31-2018
- Country:
- North America > United States
- California (0.14)
- Massachusetts (0.14)
- Genre:
- Research Report (1.00)
- Industry:
- Technology