Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee
Sparse deep learning aims to address the challenge of the enormous storage consumption of deep neural networks, and to recover the sparse structure of target functions. Although tremendous empirical successes have been achieved, most sparse deep learning algorithms lack theoretical support. On the other hand, another line of work has proposed theoretical frameworks that are computationally infeasible. In this paper, we train sparse deep neural networks with a fully Bayesian treatment under spike-and-slab priors, and develop a set of computationally efficient variational inferences via continuous relaxation of the Bernoulli distribution. The variational posterior contraction rate is provided, which justifies the consistency of the proposed variational Bayes method. Interestingly, our empirical results demonstrate that this variational procedure provides uncertainty quantification in terms of the Bayesian predictive distribution and is also capable of accomplishing consistent variable selection by training a sparse multi-layer neural network.
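The abstract does not spell out the relaxation, but a standard continuous relaxation of a Bernoulli inclusion variable is the binary Concrete (Gumbel-softmax) distribution, which yields a differentiable gate that multiplies a slab weight. The Python sketch below is a minimal illustration of that general idea under this assumption, not the authors' implementation; the variational parameters `inclusion_logit`, `slab_mean`, and `slab_std`, the temperature value, and the helper `relaxed_bernoulli` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxed_bernoulli(logit, temperature, rng):
    """Sample from the binary Concrete (relaxed Bernoulli) distribution.

    A reparameterized, differentiable surrogate for a hard Bernoulli gate:
    as temperature -> 0, the sample approaches a discrete 0/1 draw.
    """
    u = rng.uniform(1e-8, 1.0 - 1e-8)          # uniform noise for reparameterization
    logistic_noise = np.log(u) - np.log1p(-u)  # a Logistic(0, 1) sample
    return 1.0 / (1.0 + np.exp(-(logit + logistic_noise) / temperature))

# Hypothetical variational parameters for a single network weight:
inclusion_logit = 0.5        # log-odds that the weight is "on" (in the slab)
slab_mean, slab_std = 0.3, 0.1

z = relaxed_bernoulli(inclusion_logit, temperature=0.5, rng=rng)
w = z * rng.normal(slab_mean, slab_std)  # spike-and-slab weight: soft gate * slab draw
print(f"gate={z:.3f}, weight={w:.3f}")
```

Because the gate `z` is a smooth function of the noise and the variational parameters, gradients of a Monte Carlo ELBO estimate can flow through it, which is what makes this family of relaxations computationally convenient for variational inference.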
Review for NeurIPS paper: Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee
The reviewers agree that this is interesting, rigorous, and novel work that explores sparsity in deep neural networks. While the assumptions required for consistency are not as general as one would hope, this paper lays the groundwork for new research directions. One weakness that the reviewers wish to see addressed is the lack of discussion of similar approaches in the BNN literature and of how one might extend this approach to more complex models.