 Aman Sinha




Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation

Neural Information Processing Systems

While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims. We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms. Using adaptive importance-sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. We demonstrate our framework on a highway scenario, accelerating system evaluation by 2-20 times over naive Monte Carlo sampling methods and 10-300P times (where P is the number of processors) over real-world testing.
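To make the importance-sampling idea concrete, here is a minimal, self-contained sketch of rare-event probability estimation using a shifted proposal distribution; the one-dimensional Gaussian scenario, the shift, and the sample size are illustrative assumptions, not the paper's actual simulation framework.

```python
import numpy as np

# Toy rare-event estimation: p = P(X > 4) for X ~ N(0, 1), true value ~3.17e-5.
# Naive Monte Carlo rarely sees the event; an importance-sampling proposal
# shifted toward the rare region concentrates samples where the event occurs.
# The proposal N(4, 1) and sample size are illustrative choices.

rng = np.random.default_rng(0)
n = 100_000
threshold = 4.0

# Naive Monte Carlo under the base distribution N(0, 1): high variance,
# since only ~3 of 100,000 samples land in the rare region on average.
x = rng.standard_normal(n)
p_naive = np.mean(x > threshold)

# Importance sampling: draw from the shifted proposal N(4, 1), then
# reweight each sample by the density ratio p0(y) / q(y).
y = rng.normal(loc=threshold, scale=1.0, size=n)
log_ratio = -0.5 * y**2 + 0.5 * (y - threshold) ** 2  # log N(0,1) - log N(4,1)
weights = np.exp(log_ratio)
p_is = np.mean((y > threshold) * weights)

print(f"naive MC:            {p_naive:.2e}")
print(f"importance sampling: {p_is:.2e}")
```

With the same number of samples, the reweighted estimate has far lower variance because nearly every proposal draw lands in the rare region; the paper's adaptive methods go further by learning a good proposal rather than fixing one in advance.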



Neural Information Processing Systems

In this section, we provide a brief overview of HMC as well as the specific rendition we use, split HMC [92]. Given "position" variables x and "momentum" variables v, we define the Hamiltonian for a dynamical system as H(x, v), which can usually be written as U(x) + K(v), where U(x) is the potential energy and K(v) is the kinetic energy. We then simulate the dynamics given by Hamilton's equations: ẋ = ∂H/∂v, v̇ = −∂H/∂x. Of course, this must be done in discrete time, since most Hamiltonians are not perfectly integrable. One notable exception is when x is Gaussian, in which case the dynamical system corresponds to the evolution of a simple harmonic oscillator (i.e., a spring-mass system).
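As a concrete illustration (my own sketch, not the paper's code), the following contrasts a discrete-time leapfrog step, needed when the Hamiltonian is not exactly integrable, with the exact phase-space rotation available in the Gaussian case; the unit-mass kinetic energy K(v) = v²/2 and the standard-Gaussian potential are assumptions.

```python
import numpy as np

def leapfrog(x, v, grad_U, eps):
    """One leapfrog step for H(x, v) = U(x) + v^2 / 2.

    Discretizes Hamilton's equations  x' = dH/dv,  v' = -dH/dx,
    as required when the dynamics are not exactly integrable.
    """
    v = v - 0.5 * eps * grad_U(x)   # half step in momentum
    x = x + eps * v                 # full step in position
    v = v - 0.5 * eps * grad_U(x)   # half step in momentum
    return x, v

def exact_gaussian_step(x, v, t):
    """Exact flow when U(x) = x^2 / 2, i.e. x is standard Gaussian.

    The system is a simple harmonic oscillator, so the solution is
    a rotation in (x, v) phase space with no discretization error;
    split HMC exploits exactly solvable Gaussian parts like this.
    """
    return x * np.cos(t) + v * np.sin(t), v * np.cos(t) - x * np.sin(t)
```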



Learning Kernels with Random Features

Neural Information Processing Systems

Randomized features provide a computationally efficient way to approximate kernel machines in machine learning tasks. However, such methods require a user-defined kernel as input. We extend the randomized-feature approach to the task of learning a kernel (via its associated random features). Specifically, we present an efficient optimization problem that learns a kernel in a supervised manner. We prove the consistency of the estimated kernel as well as generalization bounds for the class of estimators induced by the optimized kernel, and we experimentally evaluate our technique on several datasets. Our approach is efficient and highly scalable, and we attain competitive results with a fraction of the training cost of other techniques.
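A hedged sketch of the underlying random-feature machinery (the RBF kernel, the bandwidth, and the label-alignment score below are illustrative choices, not the paper's optimization problem): random Fourier features φ(x) satisfy k(x, z) ≈ φ(x)ᵀφ(z) for a shift-invariant kernel, and learning a kernel amounts to reweighting those features in a supervised manner.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000          # input dimension, number of random features
X = rng.standard_normal((200, d))
y = np.sign(X[:, 0])    # toy labels; illustrative only

# Random Fourier features for the Gaussian (RBF) kernel with bandwidth 1:
# k(x, z) = exp(-||x - z||^2 / 2) ≈ phi(x) @ phi(z)
W = rng.standard_normal((D, d))           # frequencies ~ N(0, I)
b = rng.uniform(0, 2 * np.pi, size=D)     # random phases
phi = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Uniform weights recover the fixed RBF approximation; a learned kernel
# instead reweights features, here by their alignment with the labels y.
alignment = (phi.T @ y) ** 2              # per-feature score (illustrative)
q = alignment / alignment.sum()           # normalized feature weights
phi_learned = phi * np.sqrt(D * q)        # reweighted feature map

K_rbf = phi @ phi.T                       # approximates the fixed RBF kernel
K_learned = phi_learned @ phi_learned.T   # kernel induced by learned weights
```

In the paper's setting, the feature weights are obtained from an efficient supervised optimization problem with consistency and generalization guarantees, rather than from this simple alignment heuristic.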