Uncertainty Propagation in Deep Neural Networks Using Extended Kalman Filtering

Jessica S. Titensky, Hayden Jananthan, Jeremy Kepner

arXiv.org Machine Learning 

Abstract--Extended Kalman Filtering (EKF) can be used to propagate and quantify input uncertainty through a Deep Neural Network (DNN) under mild hypotheses on the input distribution. This methodology yields results comparable to existing methods of uncertainty propagation for DNNs while considerably lowering the computational overhead. Additionally, EKF allows model error to be naturally incorporated into the output uncertainty. Quantifying output uncertainty is important, for example, in confidence scoring for automatic speech recognition, where background noise can distort the input signal [1]. Although the input uncertainty distribution may not be known exactly, it can be approximated by a Gaussian and modified later if necessary [2].
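The core EKF idea described in the abstract is to push a Gaussian (mean and covariance) through each layer of the network by linearizing the layer at the current mean: the mean is mapped through the layer nonlinearly, while the covariance is transformed by the layer's Jacobian. The sketch below is a minimal illustration of that technique, not the authors' implementation; the layer sizes, the random weights, the input covariance, and helper names such as `ekf_layer` are all assumptions for the example.

```python
import numpy as np

def relu(x):
    # Elementwise ReLU activation
    return np.maximum(x, 0.0)

def relu_jacobian(pre_act):
    # Diagonal Jacobian of elementwise ReLU at the pre-activation point
    return np.diag((pre_act > 0).astype(float))

def ekf_layer(mean, cov, W, b, act=relu, act_jac=relu_jacobian):
    """Propagate a Gaussian N(mean, cov) through one dense layer.

    EKF-style linearization at the mean:
        mu'    = act(W mu + b)
        Sigma' = J Sigma J^T,  J = d act(W x + b)/dx evaluated at x = mu
    """
    pre = W @ mean + b
    J = act_jac(pre) @ W  # chain rule: activation Jacobian times weight matrix
    return act(pre), J @ cov @ J.T

# Hypothetical two-layer network with a Gaussian input (illustrative values)
rng = np.random.default_rng(0)
mu = rng.normal(size=4)
Sigma = 0.01 * np.eye(4)            # assumed input covariance
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

mu, Sigma = ekf_layer(mu, Sigma, W1, b1)
mu, Sigma = ekf_layer(mu, Sigma, W2, b2)
# Sigma is now the propagated output covariance; its diagonal gives
# per-output variances that quantify the input-induced uncertainty.
print(Sigma.shape)
```

Model error can be folded in, as the abstract notes, by adding a process-noise covariance term to `Sigma` at each layer, mirroring the prediction step of a standard EKF.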
