Global SNR Estimation of Speech Signals using Entropy and Uncertainty Estimates from Dropout Networks

arXiv.org Artificial Intelligence

This paper demonstrates two novel methods to estimate the global SNR of speech signals. In both methods, the Deep Neural Network-Hidden Markov Model (DNN-HMM) acoustic model used in speech recognition systems is leveraged for the additional task of SNR estimation. In the first method, the entropy of the DNN-HMM output is computed. Recent work on Bayesian deep learning has shown that a DNN-HMM trained with dropout can be used to estimate model uncertainty by approximating it as a deep Gaussian process. In the second method, this approximation is used to obtain model uncertainty estimates. Noise-specific regressors are used to predict the SNR from the entropy and the model uncertainty. The DNN-HMM is trained on the GRID corpus and tested on different noise profiles from the DEMAND noise database at SNR levels ranging from -10 dB to 30 dB.
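
The two quantities fed to the noise-specific regressors, output entropy and dropout-based model uncertainty, can be sketched roughly as follows. This is an illustrative sketch rather than the authors' implementation; the predict_with_dropout callable (a forward pass with dropout kept active at test time) and the choice of mean-pooling over frames are assumptions.

```python
import numpy as np

def frame_entropy(posteriors, eps=1e-12):
    """Shannon entropy of per-frame senone posteriors, shape (T, num_states)."""
    p = np.clip(posteriors, eps, 1.0)
    return -np.sum(p * np.log(p), axis=1)            # shape (T,)

def mc_dropout_uncertainty(predict_with_dropout, features, n_samples=20):
    """Monte Carlo dropout: average per-frame posterior variance over
    stochastic forward passes with dropout kept active at test time."""
    samples = np.stack([predict_with_dropout(features) for _ in range(n_samples)])
    return samples.var(axis=0).mean()                # scalar uncertainty proxy

# An utterance-level entropy feature could then be the mean frame entropy,
# and a noise-specific regressor maps (entropy, uncertainty) -> global SNR in dB.
```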


Deep and Confident Prediction for Time Series at Uber

arXiv.org Machine Learning

Reliable uncertainty estimation for time series prediction is critical in many fields, including physics, biology, and manufacturing. At Uber, probabilistic time series forecasting is used for robust prediction of the number of trips during special events, driver incentive allocation, as well as real-time anomaly detection across millions of metrics. Classical time series models are often used in conjunction with a probabilistic formulation for uncertainty estimation. However, such models are hard to tune and scale, and difficult to extend with exogenous variables. Motivated by the recent resurgence of Long Short Term Memory networks, we propose a novel end-to-end Bayesian deep model that provides time series prediction along with uncertainty estimation. We provide detailed experiments of the proposed solution on completed trips data, and successfully apply it to large-scale time series anomaly detection at Uber.
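
A rough sketch of the core mechanism, a forecaster wrapped with Monte Carlo dropout plus an inherent-noise term to form a prediction interval, is given below. The stochastic_forecast callable (an LSTM forward pass with dropout kept active) and the held-out residual variance noise_var are assumptions, not the paper's exact procedure, which also accounts for model misspecification.

```python
import numpy as np

def predictive_interval(stochastic_forecast, x, n_samples=100, noise_var=0.0, z=1.96):
    """Approximate 95% prediction interval from Monte Carlo dropout passes.

    stochastic_forecast(x) must run the network with dropout active, so
    repeated calls return different point forecasts. noise_var estimates
    the inherent noise, e.g. residual variance on a held-out set.
    """
    samples = np.array([stochastic_forecast(x) for _ in range(n_samples)])
    mean = samples.mean(axis=0)
    model_var = samples.var(axis=0)              # epistemic (model) uncertainty
    total_std = np.sqrt(model_var + noise_var)   # add inherent noise
    return mean - z * total_std, mean + z * total_std
```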


Engineering Uncertainty Estimation in Neural Networks for Time Series Prediction at Uber

@machinelearnbot

Accurate time series forecasting during high variance segments (e.g., holidays and sporting events) is critical for anomaly detection, resource allocation, budget planning, and other related tasks necessary to facilitate optimal Uber user experiences at scale. Forecasting these variables, however, can be challenging because extreme event prediction depends on weather, city population growth, and other external factors that contribute to forecast uncertainty. In recent years, the Long Short Term Memory (LSTM) technique has become a popular time series modeling framework due to its end-to-end modeling, ease of incorporating exogenous variables, and automatic feature extraction abilities. Uncertainty estimation in deep learning remains a less explored but increasingly important component of assessing the reliability of forecasts from LSTM models. Through our research, we found that a neural network forecasting model is able to outperform classical time series methods in use cases with long, interdependent time series. While beneficial in other ways, our new model did not offer insights into prediction uncertainty, which helps determine how much we can trust the forecast.


Concrete Dropout

arXiv.org Machine Learning

Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary - a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout's discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.
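
The continuous relaxation at the heart of the method can be sketched as below: a soft, differentiable version of the Bernoulli drop mask in which the dropout probability is a trainable parameter. This is a minimal sketch under stated assumptions; the temperature value and the omission of the regularisation term that actually calibrates the learned probability are simplifications.

```python
import torch

def concrete_dropout_mask(x, logit_p, temperature=0.1, eps=1e-7):
    """Relaxed (continuous) dropout so the drop probability is trainable.

    logit_p is a learnable scalar tensor; p = sigmoid(logit_p) plays the
    role of the dropout probability. The temperature controls how close
    the soft mask is to a hard 0/1 Bernoulli mask.
    """
    p = torch.sigmoid(logit_p)
    u = torch.rand_like(x)                                   # uniform noise
    drop_logit = (torch.log(p + eps) - torch.log(1 - p + eps)
                  + torch.log(u + eps) - torch.log(1 - u + eps))
    soft_drop = torch.sigmoid(drop_logit / temperature)      # relaxed "drop" decision
    mask = 1.0 - soft_drop
    return x * mask / (1.0 - p)                              # rescale as in standard dropout

# usage (hypothetical): logit_p = torch.nn.Parameter(torch.tensor(-2.0))  # p ~ 0.12
#                       h = concrete_dropout_mask(h, logit_p)             # in forward()
```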


Deep Learning Is Not Good Enough, We Need Bayesian Deep Learning for Safe AI

#artificialintelligence

These results show that when we train on less data, or test on data which is significantly different from the training set, our epistemic uncertainty increases drastically. However, our aleatoric uncertainty remains relatively constant, as it should, because the model is tested on the same problem with the same sensor. Next, I'm going to discuss an interesting application of these ideas for multi-task learning. Multi-task learning aims to improve learning efficiency and prediction accuracy by learning multiple objectives from a shared representation. It is prevalent in many areas of machine learning, from NLP to speech recognition to computer vision.
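
The split described above, epistemic uncertainty rising off-distribution while aleatoric uncertainty stays roughly flat, follows from the standard decomposition used in this line of work, sketched below. The stochastic_predict callable, assumed to return a predicted mean and variance per input from a dropout network kept stochastic at test time, is an illustrative assumption rather than the article's exact model.

```python
import numpy as np

def decompose_uncertainty(stochastic_predict, x, n_samples=50):
    """Split predictive uncertainty into epistemic and aleatoric parts."""
    means, variances = [], []
    for _ in range(n_samples):
        mu, var = stochastic_predict(x)     # dropout network, stochastic at test time
        means.append(mu)
        variances.append(var)
    means = np.array(means)
    epistemic = means.var(axis=0)           # disagreement between dropout samples
    aleatoric = np.mean(variances, axis=0)  # data/sensor noise predicted by the model
    return epistemic, aleatoric
```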