Tree pyramidal adaptive importance sampling

arXiv.org Machine Learning

This paper introduces Tree-Pyramidal Adaptive Importance Sampling (TP-AIS), a novel iterated sampling method that outperforms current state-of-the-art approaches. TP-AIS iteratively builds a proposal distribution parameterized by a tree pyramid, where each tree leaf spans a convex subspace and represents its importance density. After each new sampling operation, a set of tree leaves is subdivided, improving the approximation of the proposal distribution to the target density. Unlike other methods in the literature, TP-AIS is parameter-free and requires no manual tuning to achieve its best performance. We evaluate the proposed method on randomized target probability density functions of varying complexity and analyze its behavior across different dimensionalities. The results are compared to state-of-the-art iterative importance sampling approaches and baseline MCMC methods using Normalized Effective Sample Size (N-ESS), Jensen-Shannon divergence to the target posterior, and time complexity.
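
As a rough illustration of the mechanism described above, the sketch below keeps a flat list of axis-aligned leaves over the unit hypercube, samples from the leaf mixture, and splits the heaviest leaf after every draw. This is not the authors' reference implementation; the `Leaf` class and `tp_ais` function are hypothetical names introduced here for illustration.

```python
import numpy as np

# Simplified sketch of a tree-pyramid proposal: each leaf covers an
# axis-aligned box, stores the target density at its center, and the proposal
# is a mixture of uniform densities over the leaves weighted by density * volume.

class Leaf:
    def __init__(self, lo, hi, target):
        self.lo, self.hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
        self.center = 0.5 * (self.lo + self.hi)
        self.volume = float(np.prod(self.hi - self.lo))
        self.density = target(self.center)           # importance density stored at the leaf
        self.weight = self.density * self.volume     # unnormalized mixture weight

def tp_ais(target, dim, n_samples, rng=np.random.default_rng(0)):
    leaves = [Leaf(np.zeros(dim), np.ones(dim), target)]   # root spans the unit hypercube
    samples, weights = [], []
    for _ in range(n_samples):
        probs = np.array([l.weight for l in leaves])
        probs = probs / probs.sum()
        idx = rng.choice(len(leaves), p=probs)
        leaf = leaves[idx]
        x = rng.uniform(leaf.lo, leaf.hi)                   # uniform draw inside the chosen leaf
        proposal = probs[idx] / leaf.volume                 # mixture proposal density at x
        samples.append(x)
        weights.append(target(x) / proposal)                # importance weight
        # refine the proposal: split the heaviest leaf along its widest axis
        split = max(leaves, key=lambda l: l.weight)
        axis = int(np.argmax(split.hi - split.lo))
        mid = split.center[axis]
        left_hi, right_lo = split.hi.copy(), split.lo.copy()
        left_hi[axis], right_lo[axis] = mid, mid
        leaves.remove(split)
        leaves += [Leaf(split.lo, left_hi, target), Leaf(right_lo, split.hi, target)]
    return np.array(samples), np.array(weights)

# Example: importance-sample a 2-D Gaussian bump centered in the unit square
xs, ws = tp_ais(lambda x: np.exp(-np.sum((x - 0.5) ** 2) / 0.02), dim=2, n_samples=200)
```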


The Gaussian Process Density Sampler

Neural Information Processing Systems

We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull-reconstruction task.
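
The construction underlying the GPDS can be illustrated with a deliberately crude, grid-based sketch: a function is drawn from a GP prior, squashed through a sigmoid, and multiplied by a simple base density. The paper itself uses an exact retrospective rejection scheme rather than a grid, and all names below are placeholders introduced here.

```python
import numpy as np

# Crude grid-based illustration of the GPDS construction: the density is
# proportional to sigmoid(f(x)) * base(x), where f is drawn from a GP prior.

def rbf_kernel(xs, lengthscale=0.3, variance=1.0):
    d = xs[:, None] - xs[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gpds_grid_samples(n_samples, grid_size=200, rng=np.random.default_rng(0)):
    xs = np.linspace(-3.0, 3.0, grid_size)
    K = rbf_kernel(xs) + 1e-8 * np.eye(grid_size)        # jitter for numerical stability
    f = rng.multivariate_normal(np.zeros(grid_size), K)  # draw f from the GP prior
    base = np.exp(-0.5 * xs ** 2)                        # unnormalized standard-normal base density
    unnorm = base / (1.0 + np.exp(-f))                   # sigmoid(f) * base
    probs = unnorm / unnorm.sum()                        # normalize over the grid
    return rng.choice(xs, size=n_samples, p=probs)

samples = gpds_grid_samples(1000)
```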


Metropolis-Hastings Generative Adversarial Networks

arXiv.org Machine Learning

We introduce the Metropolis-Hastings generative adversarial network (MH-GAN), which combines aspects of Markov chain Monte Carlo and GANs. The MH-GAN draws samples from the distribution implicitly defined by a GAN's discriminator-generator pair, as opposed to standard GAN sampling, which draws from the distribution defined by the generator alone. It uses the discriminator from GAN training to build a wrapper around the generator for improved sampling. With a perfect discriminator, this wrapped generator samples exactly from the true data distribution even when the generator is imperfect. We demonstrate the benefits of the improved generator on multiple benchmark datasets, including CIFAR-10 and CelebA, using DCGAN and WGAN.
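
The wrapper amounts to an independence Metropolis-Hastings chain whose proposal is the generator and whose acceptance ratio is estimated from the discriminator via D/(1-D). A minimal sketch, assuming `generator()` returns a single sample and `discriminator(x)` returns a calibrated probability that x is real (both are placeholders for the trained models):

```python
import numpy as np

# Sketch of the MH-GAN sampling loop: independent proposals from the generator,
# accepted with a Metropolis-Hastings ratio built from discriminator scores.

def mh_gan_sample(generator, discriminator, chain_length=640,
                  rng=np.random.default_rng(0), eps=1e-8):
    x = generator()                       # initialize the chain from the generator
    score = discriminator(x)
    for _ in range(chain_length):
        x_prop = generator()              # independent proposal from the generator
        score_prop = discriminator(x_prop)
        # estimated density-ratio ratio: [D'/(1-D')] / [D/(1-D)]
        alpha = (score_prop / (1.0 - score_prop + eps)) / (score / (1.0 - score + eps) + eps)
        if rng.uniform() < min(1.0, alpha):
            x, score = x_prop, score_prop
    return x                              # final chain state is the MH-GAN sample
```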


Gradient Importance Sampling

arXiv.org Machine Learning

Adaptive Monte Carlo schemes developed in recent years usually seek to ensure ergodicity of the sampling process, in line with MCMC tradition. This poses constraints on what is possible in terms of adaptation: in the general case, ergodicity can only be guaranteed if adaptation is diminished at a certain rate. Importance sampling approaches offer a way to circumvent this limitation and design sampling algorithms that keep adapting. Here I present a gradient-informed variant of SMC (and of its special case, Population Monte Carlo) for static problems.
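
A hedged sketch of what one gradient-informed Population Monte Carlo iteration might look like: a Langevin-style drifted Gaussian proposal around each particle, followed by importance weighting and resampling. This illustrates the general idea, not the paper's exact algorithm; `log_p` and `grad_log_p` are placeholders for the target log-density and its gradient.

```python
import numpy as np

# One gradient-informed PMC iteration (illustrative): drift each particle along
# the gradient of the log target, add Gaussian noise, re-weight, and resample.

def gradient_pmc_step(particles, log_p, grad_log_p, step=0.1, rng=np.random.default_rng(0)):
    n, dim = particles.shape
    drift = particles + 0.5 * step ** 2 * np.array([grad_log_p(x) for x in particles])
    proposals = drift + step * rng.standard_normal((n, dim))
    # Gaussian proposal log-density q(x' | x), up to an additive constant that
    # is shared by all particles and cancels when the weights are normalized
    log_q = -0.5 * np.sum((proposals - drift) ** 2, axis=1) / step ** 2
    log_w = np.array([log_p(x) for x in proposals]) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)      # resample the population by weight
    return proposals[idx], w
```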


Elliptical slice sampling

arXiv.org Machine Learning

Many probabilistic models introduce strong dependencies between variables using a latent multivariate Gaussian distribution or a Gaussian process. We present a new Markov chain Monte Carlo algorithm for performing inference in models with multivariate Gaussian priors. Its key properties are: 1) it has simple, generic code applicable to many models, 2) it has no free parameters, 3) it works well for a variety of Gaussian process based models. These properties make our method ideal for use while model building, removing the need to spend time deriving and tuning updates for more complex algorithms.
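
The parameter-free update at the heart of elliptical slice sampling is short enough to sketch directly. The following follows the published algorithm (Murray, Adams & MacKay, 2010), assuming a zero-mean Gaussian prior; `prior_sample` and `log_lik` are placeholders for the model at hand.

```python
import numpy as np

# One elliptical slice sampling update: draw an auxiliary point from the prior,
# then shrink an angle bracket on the ellipse through the current state until a
# point above the likelihood threshold is found.

def elliptical_slice_update(x, log_lik, prior_sample, rng=np.random.default_rng(0)):
    nu = prior_sample()                                   # auxiliary draw from the Gaussian prior
    log_y = log_lik(x) + np.log(rng.uniform())            # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)                 # initial angle on the ellipse
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        x_prop = x * np.cos(theta) + nu * np.sin(theta)   # point on the ellipse through x and nu
        if log_lik(x_prop) > log_y:
            return x_prop                                  # accepted: proposal lies on the slice
        # shrink the bracket towards the current state and retry
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```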