While machine learning systems achieve high success rates on many complex tasks, research shows they can also fail in very unexpected situations. The rise of machine learning products in safety-critical industries has drawn increased attention to evaluating model robustness and estimating failure probability in machine learning systems. In this work, we propose a design for training a student model -- a failure predictor -- to predict the main model's error on input instances from their saliency maps. We implement our failure predictor and review its preliminary results on an autonomous vehicle steering control system, an example of a safety-critical application.
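The student-model idea above can be sketched in a few lines. This is a minimal, purely illustrative toy, not the paper's implementation: the saliency maps are synthetic random arrays, the "failure" labels follow a made-up rule, and the failure predictor is a plain logistic regression trained by gradient descent on the flattened saliency pixels.

```python
import numpy as np

# Toy sketch of a "failure predictor" student model: logistic regression
# that maps a saliency map to the probability that the main model errs.
# All data and the labeling rule below are synthetic and illustrative.

rng = np.random.default_rng(0)

n, h, w = 200, 8, 8
saliency = rng.random((n, h, w))  # stand-in saliency maps

# Made-up ground truth: the main model "fails" when saliency mass in the
# top half of the image is above the median (a linear rule, so learnable).
top_mass = saliency[:, : h // 2, :].sum(axis=(1, 2))
y = (top_mass > np.median(top_mass)).astype(float)

# Flatten and standardize the saliency maps as features.
X = saliency.reshape(n, -1)
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Train the student by gradient descent on the logistic loss.
wts = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = sigmoid(X @ wts + b)
    wts -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

train_acc = ((sigmoid(X @ wts + b) > 0.5) == y).mean()
print(f"failure-predictor training accuracy: {train_acc:.2f}")
```

In practice the student would be a small CNN over real saliency maps and evaluated on held-out data; the sketch only shows the data flow of "saliency map in, failure probability out".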
We consider the challenging problem of high-speed autonomous racing in a realistic Formula One environment. DeepRacing is a novel end-to-end framework and a virtual testbed for training and evaluating algorithms for autonomous racing. The virtual testbed is implemented using the realistic F1 series of video games, developed by Codemasters, which many Formula One drivers use for training. This virtual testbed is released under an open-source license both as a standalone C++ API and as a binding to the popular Robot Operating System 2 (ROS2) framework. The open-source API allows anyone to use the high-fidelity physics and photo-realistic capabilities of the F1 game as a simulator, without hacking any game-engine code. We use this framework to evaluate several neural network methodologies for autonomous racing. Specifically, we consider several fully end-to-end models that directly predict steering and acceleration commands for an autonomous race car, as well as a model that predicts a list of waypoints to follow in the car's local coordinate system, leaving the task of selecting a steering angle and throttle to a classical control algorithm. We also present a novel method of autonomous racing by training a deep neural network to predict a parameterized representation of a trajectory rather than a list of waypoints. We evaluate these models' performance in our open-source simulator and show that trajectory prediction far outperforms end-to-end driving. Additionally, we show that a model's open-loop performance, i.e., the root-mean-square error of its predicted control values, does not necessarily correlate with its closed-loop driving performance, i.e., its actual ability to race around a track. Finally, we show that our proposed model of parameterized trajectory prediction outperforms both end-to-end control and waypoint prediction.
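To make the "parameterized trajectory" idea concrete, here is a minimal sketch assuming a Bézier-curve parameterization (a common choice; the abstract does not specify the exact representation, and the control points below are made up): instead of a network emitting many raw waypoints, it emits a handful of control points, and the dense trajectory handed to the classical controller is sampled from that curve.

```python
import numpy as np

def bezier(control_points, num_samples=50):
    """Evaluate a Bezier curve via de Casteljau's algorithm.

    control_points: sequence of (x, y) points predicted by the network.
    Returns an array of shape (num_samples, 2) of dense waypoints.
    """
    pts = np.asarray(control_points, dtype=float)
    samples = []
    for t in np.linspace(0.0, 1.0, num_samples):
        p = pts.copy()
        # Repeatedly interpolate between neighbors until one point remains.
        while len(p) > 1:
            p = (1.0 - t) * p[:-1] + t * p[1:]
        samples.append(p[0])
    return np.array(samples)

# Hypothetical network output: 4 control points in the car's local frame
# (x forward, y left), in meters -- 8 numbers instead of dozens of waypoints.
ctrl = [(0.0, 0.0), (10.0, 0.0), (20.0, 3.0), (30.0, 3.0)]
traj = bezier(ctrl, num_samples=50)
print(traj.shape)  # (50, 2)
```

A classical path-tracking controller (e.g., pure pursuit) would then compute steering and throttle from `traj`, which is the division of labor the abstract describes between the learned predictor and the control algorithm.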
It's just not practical to program a car to drive itself in every environment, given the nearly infinite range of possible variables involved. But, thanks to AI, we can show it how to drive. And, unlike your teenager, you can then see what it's paying attention to. With NVIDIA PilotNet, we created a neural-network-based system that learns to steer a car by observing what people do. We developed a method for the network to tell us what it prioritized when making driving decisions.
The first time I watched the DARPA challenge for self-driving cars, I thought this was a breakthrough I wanted to be involved in. But to my surprise, the challenge was in 2005 -- so long ago, and yet nothing came to the market for a long time. Moreover, Artificial Neural Networks -- the algorithms that process all this data -- were developed in the 1960s (!!). So what happened all of a sudden, and what changed to make self-driving cars (and AI) move forward? The answer is that Nvidia happened, a.k.a. GPU chip designs (Graphics Processing Units) -- even if, at the time the big breakthrough happened, Nvidia was not aware it had made this huge contribution.