Notes on the Behavior of MC Dropout

Francesco Verdoja, Ville Kyrki

arXiv.org Machine Learning 

The increasing interest in deploying deep learning solutions in real safety-critical applications, ranging from healthcare to robotics and autonomous vehicles, is making apparent the importance of properly estimating the uncertainty of the predictions made by deep neural networks [1, 2]. While most common neural network architectures only provide point estimates, uncertainty can be evaluated with Bayesian neural networks (BNNs) [3, 4], where the deterministic weights used in the majority of neural networks are replaced with distributions over the network parameters. Although the formulation of BNNs is relatively simple in theory, their use in practice is often unfeasible for most complex problems, since training requires analytically evaluating marginal probabilities, which quickly becomes intractable. Recently, variational inference methods have been proposed as a practical alternative to BNNs, but most of these formulations require double the number of parameters of a network to represent its uncertainty, which leads to increased computational cost [5, 6]. Another very popular option to model uncertainty in deep neural networks is the use of dropout as a way to approximate Bayesian variational inference [6].
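To make the last point concrete, here is a minimal sketch of MC dropout prediction in PyTorch. The network architecture, layer sizes, dropout rate, and sample count below are all hypothetical choices for illustration, not the paper's setup: the essential idea is simply to keep dropout active at test time, run several stochastic forward passes, and use the sample mean as the prediction and the sample variance as an uncertainty estimate.

```python
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Small regression network with dropout (hypothetical example)."""

    def __init__(self, in_dim=8, hidden=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Average n_samples stochastic forward passes with dropout enabled.

    Returns the sample mean (predictive estimate) and sample variance
    (a simple uncertainty estimate) over the dropout samples.
    """
    model.train()  # keeps dropout active; in a real model, freeze BatchNorm etc.
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)


model = MLP()
x = torch.randn(4, 8)  # a batch of 4 hypothetical inputs
mean, var = mc_dropout_predict(model, x)
print(mean.shape, var.shape)  # torch.Size([4, 1]) torch.Size([4, 1])
```

Note that, unlike explicit variational formulations that double the parameter count, this scheme reuses the weights of a standard dropout-trained network; the extra cost is paid at inference time, in the form of multiple forward passes.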
