End-to-end learning has recently emerged as a promising technique for tackling the problem of autonomous driving. Existing works show that learning a navigation policy from raw sensor data may reduce the system's reliance on external sensing systems (e.g., GPS) and/or outperform traditional methods based on state estimation and planning. However, existing end-to-end methods generally trade off performance for safety, hindering their adoption in real-life applications. For example, when confronted with an input that is radically different from the training data, end-to-end autonomous driving systems are likely to fail, compromising the safety of the vehicle. To detect such failure cases, this work proposes a general framework for uncertainty estimation which enables a policy trained end-to-end to predict not only action commands, but also a confidence in its own predictions. In contrast to previous works, our framework can be applied to any existing neural network and task, without the need to change the network's architecture or loss, or to retrain the network. To do so, we generate confidence levels by forward propagation of input and model uncertainties using Bayesian inference. We test our framework on the task of steering angle regression for an autonomous car, and compare our approach to existing methods with both qualitative and quantitative results on a real dataset. Finally, we show an interesting by-product of our framework: robustness against adversarial attacks.
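The abstract does not spell out the uncertainty-propagation machinery, but a widely used Bayesian approximation in the same spirit (and a common baseline such papers compare against) is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and read the spread of the predictions as a confidence estimate. The sketch below is a minimal illustration with a hypothetical toy steering regressor and made-up weights, not the paper's actual framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" weights for a 1-hidden-layer steering regressor.
W1 = rng.normal(0, 0.5, (16, 4))   # hidden x input
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (1, 16))
b2 = np.zeros(1)

def stochastic_forward(x, p_drop=0.2):
    """One forward pass with dropout kept ON at test time."""
    h = np.maximum(0.0, W1 @ x + b1)       # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop   # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
    return (W2 @ h + b2)[0]                # scalar steering angle

def predict_with_uncertainty(x, T=200):
    """MC dropout: mean prediction and predictive std over T passes."""
    samples = np.array([stochastic_forward(x) for _ in range(T)])
    return samples.mean(), samples.std()

mean, std = predict_with_uncertainty(np.array([0.1, -0.3, 0.5, 0.0]))
```

A large `std` flags inputs on which the stochastic passes disagree, i.e., candidates for the out-of-distribution failure cases described above.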
Variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions over neural network weights. Specifying meaningful weight priors is a challenging problem, particularly when scaling variational inference to deeper architectures with high-dimensional weight spaces. We propose the Bayesian MOdel Priors Extracted from Deterministic DNN (MOPED) method for stochastic variational inference, which chooses meaningful prior distributions over the weight space using deterministic weights derived from a pretrained DNN of equivalent architecture. We evaluate the proposed approach on multiple datasets and real-world application domains with a range of complex model architectures, and demonstrate that MOPED enables scalable variational inference for Bayesian DNNs. The proposed method achieves faster training convergence and provides reliable uncertainty quantification, without compromising the accuracy of the deterministic DNNs. We also propose hybrid Bayesian DNN architectures in which deterministic and variational layers are combined to balance computational complexity during the prediction phase while retaining the benefits of Bayesian inference. We will release the source code for this work.
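The core idea can be sketched in a few lines: initialize the mean of a mean-field Gaussian variational posterior from the pretrained deterministic weight, and set its scale proportional to the weight's magnitude. The snippet below is a minimal illustration of that initialization under assumed conventions (a softplus parameterization of the standard deviation and a hypothetical `delta` scale factor), not the released implementation:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def moped_posterior_init(w_det, delta=0.1):
    """MOPED-style init of a mean-field Gaussian posterior q(w)=N(mu, sigma^2):
    mu <- pretrained weight, sigma <- delta * |pretrained weight|."""
    mu = w_det.copy()
    sigma = delta * np.abs(w_det) + 1e-6   # small floor avoids sigma = 0
    rho = np.log(np.expm1(sigma))          # inverse softplus, so softplus(rho) == sigma
    return mu, rho

def sample_weights(mu, rho, rng):
    """Reparameterized sample w = mu + softplus(rho) * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + softplus(rho) * eps

rng = np.random.default_rng(0)
w_pretrained = rng.normal(0, 0.3, (8, 4))  # stand-in for a trained layer
mu, rho = moped_posterior_init(w_pretrained, delta=0.1)
w_sample = sample_weights(mu, rho, rng)
```

Starting variational training from this initialization, rather than from a generic random posterior, is what gives the faster convergence claimed above.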
Traffic signals serve to regulate the worst bottlenecks in highly populated areas but are not always very effective. Researchers at Penn State are hoping to use deep reinforcement learning to improve traffic signal efficiency in urban areas, thanks to a one-year, $22,443 Penn State Institute for CyberScience Seed Grant. Urban traffic congestion currently costs the U.S. economy $160 billion in lost productivity and causes 3.1 billion gallons of wasted fuel and 56 billion pounds of harmful CO2 emissions, according to the 2015 Urban Mobility Scorecard. Vikash Gayah, associate professor of civil engineering, and Zhenhui "Jessie" Li, associate professor of information sciences and technology, aim to tackle this issue by first identifying machine learning algorithms that will provide results consistent with traditional (theoretical) solutions for simple scenarios, and then building upon those algorithms by introducing complexities that cannot be readily addressed through traditional means. "Typically, we would go out and do traffic counts for an hour at certain peak times of day and that would determine signal timings for the next year, but not every day looks like that hour, and so we get inefficiency," Gayah said.
One week into my Research Science role at Lyft, I merged my first pull request into the Fraud team's code repository and deployed our fraud decision service. No, it wasn't to launch a groundbreaking user behavior activity-based convolutional recurrent neural network trained in a semi-supervised, adversarial fashion that challenges a user to prove her identity -- it would be a couple of years before that. Embarrassingly, it was to remove a duplicate line of feature coefficients in a hand-coded logistic regression model rolled out a little less than a year before. This small bug exposed a number of limitations of a system built primarily for a different type of usage -- that of business rules that encapsulate simple, human-readable handcrafted logic. In our old worldview, models were simply extensions of business rules.
DeepFakes are created with a deep learning technique known as Generative Adversarial Networks (GANs), in which two machine learning models are used to make the counterfeits more believable. By studying images and videos of a person, in the form of training data, the first model creates a video, while the second model attempts to detect its flaws. The two models work against each other until they create a video that is believable. DeepFake opens up a whole new world when it comes to unsupervised learning, a sub-field of machine learning where machines can learn to teach themselves, and it has been argued to hold great promise for self-driving vehicles learning to detect and recognize obstacles on the road, and for virtual assistants such as Siri, Cortana and Alexa learning to be more conversational. The real question is: like any other technology, what is its potential for misuse?
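The generator-versus-detector loop described above can be shown at toy scale. The sketch below trains a linear generator against a logistic discriminator on 1-D data with hand-derived gradients; it is a minimal illustration of the adversarial training scheme, whereas real DeepFake systems use large convolutional networks on images:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy setup: real data ~ N(2, 0.5); the generator maps noise z ~ N(0, 1).
a, b = 1.0, 0.0          # generator:     g(z) = a*z + b
w, c = 0.1, 0.0          # discriminator: d(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(500):
    x_real = rng.normal(2.0, 0.5, batch)
    z = rng.standard_normal(batch)
    x_fake = a * z + b

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0.
    s_r, s_f = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    dw = np.mean(-(1 - s_r) * x_real + s_f * x_fake)
    dc = np.mean(-(1 - s_r) + s_f)
    w, c = w - lr * dw, c - lr * dc

    # Generator step: push d(fake) -> 1 (non-saturating GAN loss).
    s_f = sigmoid(w * x_fake + c)
    g = -(1 - s_f) * w   # dLoss/dx_fake for each sample
    da, db = np.mean(g * z), np.mean(g)
    a, b = a - lr * da, b - lr * db

samples = a * rng.standard_normal(1000) + b
```

Each round, the discriminator gets better at flagging fakes and the generator gets better at fooling it; with images in place of numbers, this arms race is what makes the resulting videos hard to distinguish from real footage.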