Variational Bayesian Monte Carlo with Noisy Likelihoods

Neural Information Processing Systems

In the original formulation, observations are assumed to be exact (non-noisy), so the GP likelihood only included a small observation noise σ²_obs for numerical stability [32].


Variational Bayesian Monte Carlo

Neural Information Processing Systems

Many probabilistic models of interest in scientific computing and machine learning have expensive, black-box likelihoods that prevent the application of standard techniques for Bayesian inference, such as MCMC, which would require access to the gradient or a large number of likelihood evaluations. We introduce here a novel sample-efficient inference framework, Variational Bayesian Monte Carlo (VBMC). VBMC combines variational inference with Gaussian-process based, active-sampling Bayesian quadrature, using the latter to efficiently approximate the intractable integral in the variational objective. Our method produces both a nonparametric approximation of the posterior distribution and an approximate lower bound of the model evidence, useful for model selection. We demonstrate VBMC both on several synthetic likelihoods and on a neuronal model with data from real neurons. Across all tested problems and dimensions (up to D = 10), VBMC performs consistently well in reconstructing the posterior and the model evidence with a limited budget of likelihood evaluations, unlike other methods that work only in very low dimensions. Our framework shows great promise as a novel tool for posterior and model inference with expensive, black-box likelihoods.
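The core quadrature step described above can be illustrated with a toy example. The sketch below is a generic 1-D Bayesian quadrature computation with an RBF-kernel Gaussian process, not the VBMC implementation: for an RBF kernel and a Gaussian weight density, the kernel integrals z_i have a closed form, so the integral of the GP surrogate requires only a linear solve against the evaluated points. The length scale and node placement are illustrative assumptions.

```python
import numpy as np

# Toy 1-D Bayesian quadrature with an RBF-kernel GP surrogate, in the
# spirit of the quadrature step VBMC relies on (a generic sketch, not
# the VBMC implementation). For an RBF kernel and a Gaussian weight
# density N(0, 1), the kernel integrals z_i have closed form.
ell = 0.9                          # GP length scale (assumed)
x = np.linspace(-3.0, 3.0, 12)     # evaluation nodes
f = x**2                           # stand-in for an expensive black-box integrand

K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
# z_i = \int k(x, x_i) N(x; 0, 1) dx  (Gaussian convolution identity)
z = ell / np.sqrt(ell**2 + 1.0) * np.exp(-0.5 * x**2 / (ell**2 + 1.0))

# BQ estimate of E_{N(0,1)}[f] = \int f(x) N(x; 0, 1) dx  (true value: 1)
estimate = float(z @ np.linalg.solve(K + 1e-6 * np.eye(len(x)), f))
```

With only 12 evaluations of the integrand the estimate is close to the true value of 1, which is the sample-efficiency argument for using quadrature on a surrogate instead of Monte Carlo on the expensive likelihood itself.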


Variational Bayesian Monte Carlo

Luigi Acerbi

Neural Information Processing Systems

We introduce here a novel sample-efficient inference framework, Variational Bayesian Monte Carlo (VBMC). VBMC combines variational inference with Gaussian-process based, active-sampling Bayesian quadrature, using the latter to efficiently approximate the intractable integral in the variational objective.




Author response (NeurIPS rebuttal, excerpt)

Neural Information Processing Systems

We thank the reviewers for their useful and thoughtful feedback. We are glad to see that our work was found "highly relevant to the NeurIPS community" (R1) and to be addressing "a well-motivated problem" (R3) in "an important area". We address the reviewers' comments below. Regarding the question of whether the estimation error will change the claims of the paper: our claims are unaffected. Regarding the noiseless case, see Figure 1. The ablation studies are mentioned in the text and fully reported in the Supplement (E.3, 'Lesion study').


Stacking Variational Bayesian Monte Carlo

Francesco Silvestrin, Chengkun Li, Luigi Acerbi

arXiv.org Machine Learning

Variational Bayesian Monte Carlo (VBMC) is a sample-efficient method for approximate Bayesian inference with computationally expensive likelihoods. While VBMC's local surrogate approach provides stable approximations, its conservative exploration strategy and limited evaluation budget can cause it to miss regions of complex posteriors. In this work, we introduce Stacking Variational Bayesian Monte Carlo (S-VBMC), a method that constructs global posterior approximations by merging independent VBMC runs through a principled and inexpensive post-processing step. Our approach leverages VBMC's mixture posterior representation and per-component evidence estimates, requiring no additional likelihood evaluations while being naturally parallelizable. We demonstrate S-VBMC's effectiveness on two synthetic problems designed to challenge VBMC's exploration capabilities and two real-world applications from computational neuroscience, showing substantial improvements in posterior approximation quality across all cases.
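The merging step can be sketched in a few lines. The function below is a hypothetical simplified rule, not the S-VBMC objective: each run's mixture components are pooled and reweighted in proportion to that run's evidence (ELBO) estimate, which illustrates how per-component evidence estimates allow merging without any new likelihood evaluations. The dictionary layout and field names are assumptions for this sketch.

```python
import numpy as np

def stack_runs(runs):
    """Merge mixture posteriors from independent runs (simplified sketch).

    runs: list of dicts with keys (layout assumed for illustration)
      'w'    -- component weights, summing to 1 within a run
      'mu'   -- component means
      'elbo' -- the run's log-evidence (ELBO) estimate
    Hypothetical rule: scale each run's weights by exp(elbo) and
    renormalize across all runs; S-VBMC itself optimizes the stacked
    weights rather than fixing them this way.
    """
    elbos = np.array([r['elbo'] for r in runs])
    log_scale = elbos - np.max(elbos)          # subtract max for stability
    w_all, mu_all = [], []
    for r, ls in zip(runs, log_scale):
        w_all.append(np.asarray(r['w'], float) * np.exp(ls))
        mu_all.append(np.asarray(r['mu'], float))
    w = np.concatenate(w_all)
    w /= w.sum()                               # global mixture weights
    return w, np.concatenate(mu_all)

# Two toy runs: the run with higher evidence dominates the stacked mixture.
runs = [{'w': [0.5, 0.5], 'mu': [0.0, 1.0], 'elbo': -1.0},
        {'w': [1.0], 'mu': [5.0], 'elbo': -3.0}]
w, mu = stack_runs(runs)
```

Because the step touches only the fitted mixtures and evidence estimates, the runs themselves can be executed fully in parallel.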


Variational Bayesian Monte Carlo with Noisy Likelihoods

Neural Information Processing Systems

Variational Bayesian Monte Carlo (VBMC) is a recently introduced framework that uses Gaussian process surrogates to perform approximate Bayesian inference in models with black-box, non-cheap likelihoods. In this work, we extend VBMC to deal with noisy log-likelihood evaluations, such as those arising from simulation-based models. We introduce new 'global' acquisition functions, such as expected information gain (EIG) and variational interquantile range (VIQR), which are robust to noise and can be efficiently evaluated within the VBMC setting. In a novel, challenging, noisy-inference benchmark comprising a variety of models with real datasets from computational and cognitive neuroscience, VBMC with the VIQR acquisition achieves state-of-the-art performance in recovering the ground-truth posteriors and model evidence.
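Handling noisy log-likelihood evaluations requires the GP surrogate to model per-evaluation observation noise rather than a single tiny jitter. The sketch below is generic heteroskedastic GP regression with known per-point noise variances, the kind of surrogate this setting calls for; it is not the paper's code, and the kernel hyperparameters and test values are illustrative assumptions.

```python
import numpy as np

# GP regression with per-observation noise variances: the kind of
# surrogate needed when log-likelihood evaluations are themselves noisy
# (e.g. estimated by simulation). Generic sketch, not the VBMC code.
def gp_posterior(x_train, y_train, noise_var, x_test, ell=1.0, sf2=1.0):
    """Posterior mean/variance of an RBF-kernel GP with heteroskedastic noise."""
    def k(a, b):
        return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    K = k(x_train, x_train) + np.diag(noise_var)   # noise enters the diagonal only
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = sf2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = -0.5 * x**2                               # toy noisy log-likelihood values
s2 = np.array([0.1, 0.01, 0.01, 0.01, 0.1])   # known per-point noise variances
m, v = gp_posterior(x, y, s2, np.array([0.0]))
```

Acquisition functions such as VIQR then score candidate points through this posterior, so that points whose noise contributes most to uncertainty about the posterior mass are evaluated next.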