Excess Risk Bounds for the Bayes Risk using Variational Inference in Latent Gaussian Models
Neural Information Processing Systems
Bayesian models are established as one of the main successful paradigms for complex problems in machine learning. To handle intractable inference, research in this area has developed approximation methods that are fast and effective, but theoretical analysis of the performance of such approximations remains underdeveloped. This paper furthers that analysis by providing bounds on the excess risk of variational inference algorithms, and of related regularized loss minimization algorithms, for a large class of latent variable models with Gaussian latent variables. We strengthen previous results for variational algorithms by showing that they are competitive with any point-estimate predictor.
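To make the setting concrete, the following is a minimal, hedged sketch of variational inference in the simplest latent Gaussian model: a conjugate model with prior z ~ N(0, 1) and observations x_i | z ~ N(z, 1), where a Gaussian variational family q(z) = N(m, s^2) is fit by gradient ascent on the ELBO. All function and variable names here are this sketch's own illustrative choices, not notation from the paper, and the analyzed algorithms in the paper cover a far broader model class.

```python
import math

def elbo_fit(xs, steps=2000, lr=0.01):
    """Fit q(z) = N(m, s^2) to the posterior of the conjugate model
    z ~ N(0, 1), x_i | z ~ N(z, 1), by gradient ascent on the ELBO.
    Illustrative sketch only; closed-form gradients exist for this model."""
    n = len(xs)
    S = sum(xs)
    m, log_s = 0.0, 0.0  # variational mean and log standard deviation
    for _ in range(steps):
        s = math.exp(log_s)
        # Closed-form ELBO gradients for this conjugate Gaussian model:
        #   dELBO/dm = sum_i (x_i - m) - m
        #   dELBO/ds = -(n + 1) * s + 1 / s, times s for the log_s chain rule
        grad_m = S - (n + 1) * m
        grad_log_s = (-(n + 1) * s + 1.0 / s) * s
        m += lr * grad_m
        log_s += lr * grad_log_s
    return m, math.exp(log_s)

xs = [1.2, 0.8, 1.0, 1.4, 0.6]
m, s = elbo_fit(xs)
# For this conjugate model the exact posterior is N(sum(x)/(n+1), 1/(n+1)),
# so the fitted (m, s) should match it closely.
```

In this fully conjugate case the variational optimum recovers the exact posterior; the paper's contribution concerns risk guarantees in the general non-conjugate case, where the variational approximation is inexact.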
Feb-14-2020, 16:56:58 GMT