Reviews: Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization

Neural Information Processing Systems 

This paper introduces a method for regularizing deep neural networks by noise. The core of the approach is to draw a connection between applying random perturbations to layer activations and the optimization of a lower bound on the objective function. Experiments on four visual tasks show a slight improvement of the proposed method over dropout.

On the positive side:
- The problem of regularization for training deep neural networks is a crucial issue, with potentially large practical and theoretical impact.

On the negative side:
- The aforementioned connection between regularization by noise and lower-bounding the training objective seems to be a straightforward adaptation of [9] to the case of deep neural networks. For the most important result, given in Eq (6), i.e. the fact that using several noise samples yields a tighter bound on the objective function than using a single sample (as done in dropout), the authors refer to the derivation in [9].
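The multi-sample tightening referred to in Eq (6) can be checked numerically. A minimal NumPy sketch (using a toy importance weight w(z) = exp(z) with z ~ N(0, 1), chosen only for illustration, for which log E[w] = 1/2 exactly): averaging K noise samples inside the logarithm gives an estimate of log E[w] that increases with K toward the true value, mirroring the claim that multiple noise samplings yield a tighter lower bound than the single sample used by dropout.

```python
import numpy as np

rng = np.random.default_rng(0)


def multisample_bound(k, n_outer=200_000):
    """Monte Carlo estimate of E[log(1/K * sum_i w(z_i))],
    a lower bound on log E[w] that tightens as k grows."""
    z = rng.standard_normal((n_outer, k))  # K noise samples per outer draw
    w = np.exp(z)                          # toy weight w(z) = exp(z)
    return float(np.mean(np.log(np.mean(w, axis=1))))


# For w(z) = exp(z), z ~ N(0,1): log E[w] = 0.5 exactly.
# k = 1 recovers the single-sample bound E[log w] = E[z] = 0.
estimates = {k: multisample_bound(k) for k in (1, 10, 50)}
for k, b in estimates.items():
    print(f"K={k:3d}  bound ≈ {b:.3f}")
```

With K = 1 the estimate sits near 0 (the single-sample bound), and it rises toward 0.5 as K grows, matching the monotonicity result in [9].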