Enhancing Uncertainty Estimation in Semantic Segmentation via Monte-Carlo Frequency Dropout

Tal Zeevi, Lawrence H. Staib, John A. Onofrey

arXiv.org, Machine Learning

Estimating prediction uncertainties in deterministic deep learning models often involves the strategic introduction of controlled artificial noise into the data [1]. This can occur either before [2, 3] or during [4, 5, 6] neural network processing, with subsequent measurement of variations in model performance to assess robustness. Techniques such as DropConnect [7] and Dropout [8], which randomly omit network edges or nodes during processing, have been foundational in this respect, effectively injecting random patterns of noise into the network's operation and allowing the simulation of a predictive distribution that approximates Bayesian inference [9].

In convolutional neural network (CNN) layers, commonly used in segmentation tasks, each convolution step corresponds to a node on the network graph, essentially turning Dropout into a random source of impulse noise within the CNN feature maps. This method, however, may not comprehensively capture the predictive distribution in medical imaging, where noise extends into the frequency domain, a range poorly addressed by impulse noise. Our recent findings [10] suggest that Frequency Dropout [11], which randomly removes frequency components from feature maps during Monte Carlo (MC) simulations, refines predictive uncertainty estimates in medical imaging classification over MC-Dropout.
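To make the contrast concrete, the sketch below implements both noise sources in PyTorch (our choice of framework; the layer names, dropout rate, and FFT-based masking scheme are illustrative assumptions, not the authors' released implementation). MCDropout2d zeroes random spatial activations in a feature map, i.e. impulse noise, while MCFrequencyDropout2d zeroes random components of each feature map's 2-D Fourier spectrum; mc_predict then repeats the stochastic forward pass to form a Monte Carlo predictive distribution whose per-pixel variance serves as the uncertainty estimate.

```python
import torch
import torch.nn as nn

class MCDropout2d(nn.Module):
    """Standard Monte-Carlo Dropout: zeroes random activations in the
    spatial feature map (impulse noise). Kept active at inference time
    so that repeated forward passes differ."""
    def __init__(self, p: float = 0.2):
        super().__init__()
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = (torch.rand_like(x) > self.p).float()
        return x * mask / (1.0 - self.p)  # inverted-dropout rescaling


class MCFrequencyDropout2d(nn.Module):
    """Frequency-dropout sketch: zeroes random components of each feature
    map's 2-D Fourier spectrum and transforms back, so the injected noise
    is structured in the frequency domain rather than at single pixels."""
    def __init__(self, p: float = 0.2):
        super().__init__()
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.fft2(x)                   # complex spectrum per map
        keep = (torch.rand_like(x) > self.p).to(spec.dtype)
        return torch.fft.ifft2(spec * keep).real   # back to the spatial domain


@torch.no_grad()
def mc_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Monte-Carlo inference: repeat the stochastic forward pass and
    summarize the sampled class probabilities. The per-pixel variance
    across samples is the predictive-uncertainty estimate."""
    samples = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )
    return samples.mean(dim=0), samples.var(dim=0)
```

As a usage example, inserting either layer into a small (hypothetical) segmentation head and calling mc_predict yields a mean probability map together with an uncertainty map:

```python
net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    MCFrequencyDropout2d(p=0.2),
    nn.Conv2d(8, 2, 3, padding=1),
)
mean_probs, uncertainty = mc_predict(net, torch.randn(4, 1, 64, 64))
```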