
Bayesian Quadrature



BayesSum: Bayesian Quadrature in Discrete Spaces

Kang, Sophia Seulkee, Briol, François-Xavier, Karvonen, Toni, Chen, Zonghao

arXiv.org Machine Learning

This paper addresses the challenging computational problem of estimating intractable expectations over discrete domains. Existing approaches, including Monte Carlo and Russian Roulette estimators, are consistent but often require a large number of samples to achieve accurate results. We propose a novel estimator, \emph{BayesSum}, which extends Bayesian quadrature to discrete domains. It is more sample-efficient than alternatives because it can incorporate prior information about the integrand through a Gaussian process. We support this with theory, deriving a convergence rate significantly faster than Monte Carlo in a broad range of settings. We also demonstrate empirically that our proposed method requires fewer samples in several synthetic settings as well as for parameter estimation in Conway-Maxwell-Poisson and Potts models.
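The core mechanism, replacing the sum E_p[f] = sum_x p(x) f(x) with the posterior mean of a Gaussian process fitted to a few evaluations of f, can be sketched in a few lines. This is an illustrative toy version, not the paper's BayesSum implementation; the RBF kernel, lengthscale, and uniform measure are assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=2.0):
    # Squared-exponential kernel on integer-coded discrete states.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def bq_estimate(domain, p, sample_idx, f_vals, lengthscale=2.0, jitter=1e-9):
    """Posterior mean of E_p[f] = sum_x p(x) f(x) under a zero-mean GP prior on f."""
    X = domain[sample_idx].astype(float)
    K = rbf_kernel(X, X, lengthscale) + jitter * np.eye(len(X))
    # Kernel mean embedding of p at the sample points: z_i = sum_x p(x) k(x, x_i).
    z = p @ rbf_kernel(domain.astype(float), X, lengthscale)
    weights = np.linalg.solve(K, z)   # BQ weights w = K^{-1} z
    return weights @ f_vals

domain = np.arange(10)                # discrete states {0, ..., 9}
p = np.full(10, 0.1)                  # uniform probability mass function
f = lambda x: np.sin(0.5 * x)
idx = np.array([0, 3, 6, 9])          # only four function evaluations
est = bq_estimate(domain, p, idx, f(domain[idx].astype(float)))
```

With only four evaluations, the GP weights already place more mass on interior points than plain averaging would, which is where the sample-efficiency gain over Monte Carlo comes from.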


Variational Bayesian Monte Carlo

Luigi Acerbi

Neural Information Processing Systems

We introduce here a novel sample-efficient inference framework, Variational Bayesian Monte Carlo (VBMC). VBMC combines variational inference with Gaussian-process-based, active-sampling Bayesian quadrature, using the latter to efficiently approximate the intractable integral in the variational objective.
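The intractable term in the variational objective is the expected log joint E_q[log p(D, θ)]. As a hedged one-dimensional illustration of how Bayesian quadrature handles such a term (a sketch of the idea, not Acerbi's implementation), an RBF kernel admits a closed-form mean embedding against a Gaussian variational posterior q = N(μ, σ²):

```python
import numpy as np

def gauss_embedding(X, mu, s2, ls):
    # z_i = E_{theta ~ N(mu, s2)}[k(theta, X_i)] for a 1-D RBF kernel:
    # the Gaussian-times-Gaussian convolution has a closed form.
    c = ls / np.sqrt(ls**2 + s2)
    return c * np.exp(-0.5 * (X - mu) ** 2 / (ls**2 + s2))

def bq_expected_log_joint(log_joint, X, mu, s2, ls=1.0, jitter=1e-9):
    """BQ posterior mean of E_q[log_joint(theta)] from evaluations at X."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / ls) ** 2) + jitter * np.eye(len(X))
    z = gauss_embedding(X, mu, s2, ls)
    return z @ np.linalg.solve(K, log_joint(X))
```

Because the embedding is analytic, each update of the variational parameters reuses the same expensive log-joint evaluations, which is the source of VBMC's sample efficiency.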


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: quality, clarity, originality and significance. Overview: this paper presents a fast alternative to Monte Carlo methods for approximating intractable integrals. The main idea behind Bayesian quadrature is to exploit assumptions about, and regularities in, the likelihood surface, which pure Monte Carlo ignores. Samples are then drawn according to some criterion; in this case, samples are chosen at the location of maximal expected posterior variance of the integrand. Intuitively, this is where the model knows the least about the value of the integrand and stands to gain the most information.
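The acquisition rule the review describes, sampling where the posterior variance of the integrand is largest, can be sketched as follows. This is a generic illustration of variance-driven active Bayesian quadrature on a 1-D grid, assuming an RBF kernel and a uniform measure, not the reviewed paper's exact method:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def active_bq(f, grid, n_samples, ls=0.3, jitter=1e-9):
    """Estimate the mean of f over `grid`, picking points by maximal GP variance."""
    X = [grid[len(grid) // 2]]            # start at the middle of the grid
    y = [f(X[0])]
    for _ in range(n_samples - 1):
        Xa = np.array(X)
        K = rbf(Xa, Xa, ls) + jitter * np.eye(len(Xa))
        Kinv = np.linalg.inv(K)
        k_star = rbf(grid, Xa, ls)
        # Posterior variance at every candidate: 1 - k* K^{-1} k*^T (diagonal).
        var = 1.0 - np.einsum('ij,jk,ik->i', k_star, Kinv, k_star)
        nxt = grid[np.argmax(var)]        # most-uncertain location gets sampled
        X.append(nxt)
        y.append(f(nxt))
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa, ls) + jitter * np.eye(len(Xa))
    z = rbf(grid, Xa, ls).mean(axis=0)    # kernel mean embedding of the uniform measure
    return z @ np.linalg.solve(K, ya)
```

Note how the rule behaves exactly as the review says: after the first sample the variance is largest at the points farthest from anything observed, so the scheme spreads evaluations into the regions the model knows least about.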