Generalization Error Bounds for Deep Variational Inference
Chérief-Abdellatif, Badr-Eddine
Variational inference is becoming increasingly popular for approximating intractable posterior distributions in Bayesian statistics and machine learning. Meanwhile, a few recent works have provided theoretical justification for, and new insights into, deep neural networks for estimating smooth functions in standard settings such as nonparametric regression. In this paper, we show that variational inference for sparse deep learning retains the same generalization properties as exact Bayesian inference. In particular, we highlight the connection between estimation and approximation theories via the classical bias-variance trade-off, and show that it leads to near-minimax rates of convergence for Hölder-smooth functions. Additionally, we show that model selection over the neural network architecture via ELBO maximization does not overfit and adaptively achieves the optimal rate of convergence.
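The abstract's central object, the evidence lower bound (ELBO), decomposes into an expected log-likelihood term minus a KL divergence from the variational posterior to the prior; maximizing it trades data fit against complexity, which is the bias-variance mechanism the paper exploits. Below is a minimal Monte Carlo sketch of this decomposition for a toy one-parameter Bayesian regression with a mean-field Gaussian variational family; the model, data, and all function names are illustrative assumptions, not the paper's actual construction (which concerns sparse deep networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise. We infer the slope w under a N(0, 1) prior,
# using a Gaussian variational posterior q(w) = N(mu, sigma^2).
x = rng.normal(size=50)
y = 2.0 * x + 0.1 * rng.normal(size=50)
NOISE_STD = 0.1  # assumed known observation noise

def elbo(mu, log_sigma, n_samples=1000):
    """Monte Carlo estimate of ELBO = E_q[log p(y | x, w)] - KL(q || prior)."""
    sigma = np.exp(log_sigma)
    # Reparameterized samples w ~ q(w)
    w = mu + sigma * rng.normal(size=n_samples)
    # Gaussian log-likelihood, averaged over the variational samples
    resid = y[None, :] - w[:, None] * x[None, :]
    log_lik = np.mean(
        -0.5 * np.sum((resid / NOISE_STD) ** 2
                      + np.log(2 * np.pi * NOISE_STD ** 2), axis=1)
    )
    # Closed-form KL between N(mu, sigma^2) and the N(0, 1) prior
    kl = 0.5 * (sigma ** 2 + mu ** 2 - 1.0) - log_sigma
    return log_lik - kl

# A variational mean near the true slope scores a higher ELBO than a poor one,
# illustrating why ELBO maximization selects well-fitting, simple models.
print(elbo(2.0, np.log(0.1)) > elbo(0.0, np.log(0.1)))  # True
```

In the paper's setting the same bound is maximized not only over variational parameters but also over network architectures, and the KL penalty is what prevents the larger architectures from overfitting.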
Aug-9-2019