Collaborating Authors: Picchini, Umberto


Fast, accurate and lightweight sequential simulation-based inference using Gaussian locally linear mappings

arXiv.org Machine Learning

Bayesian inference for complex models with an intractable likelihood can be tackled using algorithms that perform many calls to computer simulators. These approaches are collectively known as "simulation-based inference" (SBI). Recent SBI methods have used neural networks (NN) to provide approximate, yet expressive, constructs for the unavailable likelihood function and the posterior distribution. However, they do not generally achieve an optimal trade-off between accuracy and computational demand. In this work, we propose an alternative that provides approximations to both the likelihood and the posterior distribution, using structured mixtures of probability distributions. Our approach produces accurate posterior inference when compared to state-of-the-art NN-based SBI methods, while exhibiting a much smaller computational footprint. We illustrate our results on several benchmark models from the SBI literature.
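
As an illustration of the general idea only (not the paper's GLLiM-based implementation), the sketch below fits a joint Gaussian mixture over simulated (theta, x) pairs and conditions on the observed data to obtain a mixture-of-Gaussians posterior approximation; the simulator, prior range, mixture size, and observation are toy placeholders.

```python
# Minimal sketch: fit a joint Gaussian mixture over (theta, x) pairs from
# prior-predictive simulations, then condition on the observed data to obtain
# a mixture-of-Gaussians posterior approximation.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def simulator(theta):                      # toy simulator: noisy nonlinear map
    return theta ** 2 + 0.1 * rng.standard_normal(theta.shape)

n_sim, d_theta = 5000, 2
theta = rng.uniform(-2.0, 2.0, size=(n_sim, d_theta))   # prior draws
x = simulator(theta)                                     # simulated data
joint = np.hstack([theta, x])

gmm = GaussianMixture(n_components=8, covariance_type="full").fit(joint)

def posterior_mixture(x_obs):
    """Condition each joint Gaussian component on x = x_obs."""
    weights, means, covs = [], [], []
    for k in range(gmm.n_components):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        mu_t, mu_x = mu[:d_theta], mu[d_theta:]
        S_tt, S_tx = S[:d_theta, :d_theta], S[:d_theta, d_theta:]
        S_xx = S[d_theta:, d_theta:]
        gain = S_tx @ np.linalg.inv(S_xx)
        means.append(mu_t + gain @ (x_obs - mu_x))          # conditional mean
        covs.append(S_tt - gain @ S_tx.T)                   # conditional covariance
        weights.append(gmm.weights_[k] *
                       multivariate_normal.pdf(x_obs, mu_x, S_xx))
    weights = np.asarray(weights)
    return weights / weights.sum(), means, covs

w, m, C = posterior_mixture(x_obs=np.array([1.0, 0.25]))
print("posterior mean estimate:", sum(wk * mk for wk, mk in zip(w, m)))
```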


JANA: Jointly Amortized Neural Approximation of Complex Bayesian Models

arXiv.org Artificial Intelligence

Neural networks trained on model simulations enable amortized inference: a pre-trained network can be stored and re-used for Bayesian inference on millions of data sets (von Krause et al., 2022). Crucially, most previous neural approaches have tackled either surrogate modeling (SM) or simulation-based inference (SBI) in isolation, and little attention has been paid to learning both tasks simultaneously. To address this gap, we propose JANA ("Jointly Amortized Neural Approximation"), a Bayesian neural framework for simultaneously amortized SM and SBI that approximates the intractable likelihood functions and posterior densities arising in both settings. We train three complementary networks in an end-to-end fashion: 1) a summary network to compress individual data points, sets, or time series into informative embedding vectors; 2) a posterior network to learn an amortized approximate posterior; and 3) a likelihood network to learn an amortized approximate likelihood. JANA enables novel solutions to challenging downstream tasks such as the estimation of marginal and posterior predictive distributions, and presents a major qualitative upgrade to the BayesFlow framework (Radev et al., 2020), which was originally designed for amortized SBI alone.
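
A heavily simplified sketch of the jointly amortized idea follows; JANA itself uses invertible normalizing flows within BayesFlow, whereas here both conditional densities are diagonal Gaussians, and the simulator, dimensions, and training settings are made-up stand-ins.

```python
# Simplified sketch of jointly amortized posterior + likelihood learning:
# a summary network embeds the data set, a posterior model and a likelihood
# model are trained together by minimizing the sum of their negative log densities.
import torch
import torch.nn as nn

d_theta, d_x, n_obs = 2, 1, 10

def simulate(n):
    theta = torch.rand(n, d_theta) * 4 - 2                 # prior draws
    x = theta[:, :1] + 0.5 * theta[:, 1:] ** 2 \
        + 0.2 * torch.randn(n, n_obs, d_x).squeeze(-1)     # i.i.d. observations
    return theta, x.unsqueeze(-1)

class SummaryNet(nn.Module):                               # permutation-invariant embedding
    def __init__(self, d_embed=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_x, 32), nn.ReLU(), nn.Linear(32, d_embed))
    def forward(self, x):                                  # x: (batch, n_obs, d_x)
        return self.phi(x).mean(dim=1)                     # DeepSets-style mean pooling

class CondGaussian(nn.Module):                             # diagonal Gaussian conditional density
    def __init__(self, d_in, d_out):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, 2 * d_out))
    def log_prob(self, y, cond):
        mu, log_sig = self.net(cond).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_sig.exp()).log_prob(y).sum(-1)

summary = SummaryNet()
posterior_net = CondGaussian(16, d_theta)                  # q(theta | s(x))
likelihood_net = CondGaussian(d_theta, d_x)                # q(x_i | theta)

params = list(summary.parameters()) + list(posterior_net.parameters()) \
         + list(likelihood_net.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(2000):
    theta, x = simulate(256)
    s = summary(x)
    post_nll = -posterior_net.log_prob(theta, s).mean()
    theta_rep = theta.unsqueeze(1).expand(-1, n_obs, -1)   # condition each obs on theta
    lik_nll = -likelihood_net.log_prob(x, theta_rep).mean()
    loss = post_nll + lik_nll                              # joint end-to-end objective
    opt.zero_grad(); loss.backward(); opt.step()
```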


Sequential Neural Posterior and Likelihood Approximation

arXiv.org Machine Learning

We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm. SNPLA is a normalizing-flows-based algorithm for inference in implicit models and is thus a simulation-based inference method that only requires simulations from a generative model. Compared to similar methods, the main advantage of SNPLA is that it jointly learns both the posterior and the likelihood. SNPLA completely avoids Markov chain Monte Carlo sampling and the correction steps for the parameter proposal function that are introduced in similar methods but can be numerically unstable or restrictive. Over four experiments, we show that SNPLA performs competitively when utilizing the same number of model simulations as other methods, even though the inference problem for SNPLA is more complex due to the joint learning of the posterior and likelihood functions.
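
The sketch below illustrates the joint posterior/likelihood learning loop in schematic form, with simple Gaussian models standing in for SNPLA's normalizing flows and a toy simulator, prior, observation, and round/iteration counts; the point it conveys is that parameter proposals come directly from the current posterior model, with no MCMC sampling or proposal correction.

```python
# Schematic sketch: the likelihood model is fit to (theta, x) simulations, and the
# posterior model is updated by minimizing a reverse KL against prior x learned
# likelihood at the observed data, so proposals come straight from the posterior model.
import torch
import torch.nn as nn

d_theta, d_x = 2, 2
prior = torch.distributions.MultivariateNormal(torch.zeros(d_theta), 4 * torch.eye(d_theta))

def simulator(theta):                       # toy implicit model
    return theta + 0.3 * torch.randn_like(theta)

class CondGaussian(nn.Module):              # stands in for the likelihood flow q(x | theta)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_theta, 64), nn.ReLU(), nn.Linear(64, 2 * d_x))
    def log_prob(self, x, theta):
        mu, log_sig = self.net(theta).chunk(2, -1)
        return torch.distributions.Normal(mu, log_sig.exp()).log_prob(x).sum(-1)

class PosteriorModel(nn.Module):            # stands in for the posterior flow q(theta | x_obs)
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_theta))
        self.log_sig = nn.Parameter(torch.zeros(d_theta))
    def dist(self):
        return torch.distributions.Normal(self.mu, self.log_sig.exp())

lik, post = CondGaussian(), PosteriorModel()
opt_lik = torch.optim.Adam(lik.parameters(), lr=1e-3)
opt_post = torch.optim.Adam(post.parameters(), lr=1e-2)
x_obs = torch.tensor([0.8, -0.4])

for round_ in range(4):                     # sequential rounds
    with torch.no_grad():                   # propose directly from the current posterior model
        theta_sim = post.dist().sample((1000,)) if round_ else prior.sample((1000,))
        x_sim = simulator(theta_sim)
    for _ in range(300):                    # 1) fit the likelihood model on simulations
        opt_lik.zero_grad()
        (-lik.log_prob(x_sim, theta_sim).mean()).backward()
        opt_lik.step()
    for _ in range(300):                    # 2) reverse-KL update of the posterior model
        opt_post.zero_grad()
        theta = post.dist().rsample((256,))
        kl = (post.dist().log_prob(theta).sum(-1) - prior.log_prob(theta)
              - lik.log_prob(x_obs.expand_as(theta), theta)).mean()
        kl.backward()
        opt_post.step()
```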


Partially Exchangeable Networks and Architectures for Learning Summary Statistics in Approximate Bayesian Computation

arXiv.org Machine Learning

We present a novel family of deep neural architectures, named partially exchangeable networks (PENs), that leverage probabilistic symmetries. By design, PENs are invariant to block-switch transformations, which characterize the partial exchangeability properties of conditionally Markovian processes. Moreover, we show that any block-switch-invariant function has a PEN-like representation. The DeepSets architecture is a special case of PEN, so fully exchangeable data can also be targeted. We employ PENs to learn summary statistics in approximate Bayesian computation (ABC). When compared to previous deep learning methods for learning summary statistics, our results are highly competitive for both time-series and static models. Indeed, PENs provide more reliable posterior samples even when using less training data.
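
A minimal PEN-style summary network of Markov order 1 might look as follows (a hedged reading of the architecture, with made-up layer sizes and a toy usage example): an inner network acts on overlapping pairs of consecutive observations, its outputs are summed to obtain block-switch invariance, and an outer network combines that sum with the initial observation to produce the summary statistics fed into ABC.

```python
# Sketch of a PEN-like summary network of Markov order 1: phi acts on consecutive
# pairs, the pooled (summed) output is invariant to block-switch transformations,
# and rho maps the pooled features plus the first observation to low-dimensional summaries.
import torch
import torch.nn as nn

class PEN1(nn.Module):
    def __init__(self, d_summary=2, d_inner=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, d_inner))
        self.rho = nn.Sequential(nn.Linear(d_inner + 1, 64), nn.ReLU(),
                                 nn.Linear(64, d_summary))
    def forward(self, x):                                     # x: (batch, T) time series
        pairs = torch.stack([x[:, :-1], x[:, 1:]], dim=-1)    # (batch, T-1, 2)
        pooled = self.phi(pairs).sum(dim=1)                   # invariant pooling over time
        return self.rho(torch.cat([x[:, :1], pooled], dim=-1))

# Usage idea: train the network to regress its summaries onto the simulator
# parameters, then plug the learned summaries into a standard ABC sampler.
net = PEN1()
x = torch.randn(8, 100)            # a batch of 8 toy series of length 100
print(net(x).shape)                # torch.Size([8, 2])
```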