Aman Sinha
Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation
Matthew O'Kelly, Aman Sinha, Hongseok Namkoong, Russ Tedrake, John C. Duchi
While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims. We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms. Using adaptive importance-sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. We demonstrate our framework on a highway scenario, accelerating system evaluation by 2-20 times over naive Monte Carlo sampling methods and 10-300P times (where P is the number of processors) over real-world testing.
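As a concrete illustration of the core idea, here is a minimal sketch of rare-event probability estimation via importance sampling on a toy Gaussian model. The base distribution, failure threshold, and mean-shifted proposal are illustrative stand-ins, not the paper's AV simulator or its adaptive scheme.

```python
# Minimal importance-sampling sketch for a rare-event probability.
# All modeling choices here (standard-normal base distribution, scalar
# threshold, hand-picked proposal shift) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
threshold = 4.0                      # "accident" if a draw exceeds this
n = 100_000

def failure(x):
    return x > threshold

# Naive Monte Carlo: almost no samples hit the rare event.
p_mc = failure(rng.standard_normal(n)).mean()

# Importance sampling: draw from a proposal N(mu, 1) shifted toward the
# failure region, then reweight by the likelihood ratio p(x) / q(x).
mu = threshold
x = rng.standard_normal(n) + mu
w = np.exp(-mu * x + 0.5 * mu**2)    # N(0,1) density over N(mu,1) density
p_is = (failure(x) * w).mean()

print(f"naive Monte Carlo:   {p_mc:.2e}")
print(f"importance sampling: {p_is:.2e}  (exact value ~ 3.17e-05)")
```

The adaptive methods described in the abstract tune the proposal automatically; here the mean shift is fixed by hand to keep the example short.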
In this section, we provide a brief overview of HMC as well as a specific variant, split HMC [92]. Given "position" variables x and "momentum" variables v, we define the Hamiltonian for a dynamical system as H(x, v), which can usually be written as U(x) + K(v), where U(x) is the potential energy and K(v) is the kinetic energy. We then simulate the Hamiltonian dynamics, given by Hamilton's equations: ẋ = ∂H/∂v, v̇ = −∂H/∂x. Of course, this simulation must be done in discrete time, as most Hamiltonians are not perfectly integrable. One notable exception is when x is Gaussian, in which case the dynamical system corresponds to the evolution of a simple harmonic oscillator (i.e., a spring-mass system).
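A minimal sketch of this discrete-time simulation, assuming the widely used leapfrog integrator with kinetic energy K(v) = v²/2 and a quadratic potential U(x) = x²/2; the step size and step count are illustrative choices, not taken from the text.

```python
# Hedged sketch: one simulated Hamiltonian trajectory via leapfrog,
# for H(x, v) = U(x) + K(v) with K(v) = v^2 / 2.
import numpy as np

def leapfrog(x, v, grad_U, eps, n_steps):
    """Discrete-time simulation of Hamilton's equations
    xdot = dH/dv = v,  vdot = -dH/dx = -grad_U(x)."""
    v = v - 0.5 * eps * grad_U(x)          # initial half step in momentum
    for _ in range(n_steps - 1):
        x = x + eps * v                    # full step in position
        v = v - eps * grad_U(x)            # full step in momentum
    x = x + eps * v
    v = v - 0.5 * eps * grad_U(x)          # final half step in momentum
    return x, v

# Gaussian special case (U(x) = x^2/2): a simple harmonic oscillator,
# integrable exactly as a rotation in (x, v) phase space.
def harmonic_exact(x, v, t):
    return x * np.cos(t) + v * np.sin(t), v * np.cos(t) - x * np.sin(t)

x0, v0 = 1.0, 0.5
print(leapfrog(x0, v0, grad_U=lambda x: x, eps=0.01, n_steps=100))
print(harmonic_exact(x0, v0, t=1.0))      # exact rotation, for comparison
```

The harmonic_exact function illustrates the Gaussian exception mentioned above: the dynamics reduce to a rotation in phase space, so no discretization error is incurred.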
Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems
Learning-based methodologies increasingly find applications in safety-critical domains like autonomous driving and medical robotics. Due to the rare nature of dangerous events, real-world testing is prohibitively expensive and unscalable. In this work, we employ a probabilistic approach to safety evaluation in simulation, where we are concerned with computing the probability of dangerous events. We develop a novel rare-event simulation method that combines exploration, exploitation, and optimization techniques to find failure modes and estimate their rate of occurrence. We provide rigorous guarantees for the performance of our method in terms of both statistical and computational efficiency. Finally, we demonstrate the efficacy of our approach on a variety of scenarios, illustrating its usefulness as a tool for the rapid sensitivity analysis and model comparison that are essential to developing and testing safety-critical autonomous systems.
Learning Kernels with Random Features
Aman Sinha, John C. Duchi
Randomized features provide a computationally efficient way to approximate kernel machines in machine learning tasks. However, such methods require a user-defined kernel as input. We extend the randomized-feature approach to the task of learning a kernel (via its associated random features). Specifically, we present an efficient optimization problem that learns a kernel in a supervised manner. We prove the consistency of the estimated kernel as well as generalization bounds for the class of estimators induced by the optimized kernel, and we experimentally evaluate our technique on several datasets. Our approach is efficient and highly scalable, and we attain competitive results with a fraction of the training cost of other techniques.
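For context, the sketch below shows the fixed random-feature baseline that this work extends: random Fourier features approximating a user-defined RBF kernel, in the style of Rahimi and Recht. The bandwidth and dimensions are arbitrary illustrative choices; the paper's contribution is learning the feature distribution rather than fixing it in advance as done here.

```python
# Hedged sketch: random Fourier features for a fixed, user-defined RBF
# kernel k(x, y) = exp(-gamma * ||x - y||^2). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000                  # input dimension, number of random features
gamma = 0.5                     # RBF bandwidth parameter

# Sample frequencies from the kernel's spectral density: w ~ N(0, 2*gamma*I).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(X):
    """Feature map such that phi(x) @ phi(y) approximates k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = phi(x) @ phi(y)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
print(approx, exact)            # should agree to a few decimal places
```

Training a linear model on phi(X) then approximates the corresponding kernel machine at a fraction of the cost, which is the computational appeal the abstract builds on.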