Unscented Gaussian Process Latent Variable Model: learning from uncertain inputs with intractable kernels

arXiv.org Machine Learning

The flexibility of the Gaussian Process (GP) framework has enabled its use in several data modeling scenarios. The setting where the inputs are unavailable or uncertain and generate possibly noisy observations is usually tackled by the well-known Gaussian Process Latent Variable Model (GPLVM). However, the standard variational approach to inference in the GPLVM involves expressions that are tractable for only a few kernel functions, which may hinder its general application. While quadrature or sampling approaches could be used in that case, they are usually very slow and/or non-deterministic. In the present paper, we propose the use of the unscented transformation to enable the use of any kernel function within the Bayesian GPLVM. Our approach maintains the fully deterministic character of tractable kernels and admits a simple implementation with only moderate computational cost. Experiments on dimensionality reduction and multistep-ahead prediction with uncertainty propagation indicate the feasibility of our proposal.
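As a hedged illustration of the idea in this abstract: the sketch below uses the standard unscented transform to approximate the Gaussian expectation of a kernel at a single latent point, the kind of statistic that is intractable for kernels outside a small family. The kernel choice (Matérn 3/2), the function names, and the parameter kappa are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigma_points(mu, cov, kappa=1.0):
    """Standard unscented sigma points and weights for x ~ N(mu, cov)."""
    d = mu.shape[0]
    L = np.linalg.cholesky((d + kappa) * cov)
    points = [mu] + [mu + L[:, i] for i in range(d)] + [mu - L[:, i] for i in range(d)]
    weights = np.full(2 * d + 1, 1.0 / (2 * (d + kappa)))
    weights[0] = kappa / (d + kappa)
    return np.array(points), weights

def matern32(x, z, lengthscale=1.0, variance=1.0):
    """Matern 3/2 kernel -- its Gaussian expectation has no closed form."""
    r = np.sqrt(3.0) * np.linalg.norm(x - z) / lengthscale
    return variance * (1.0 + r) * np.exp(-r)

def unscented_kernel_expectation(mu, cov, z):
    """Approximate E_{x~N(mu,cov)}[k(x, z)] with the unscented transform."""
    points, weights = sigma_points(mu, cov)
    return sum(w * matern32(p, z) for w, p in zip(weights, points))

# Example: a 2-D latent point with diagonal uncertainty and one reference input.
mu = np.array([0.3, -1.2])
cov = np.diag([0.5, 0.1])
z = np.zeros(2)
print(unscented_kernel_expectation(mu, cov, z))
```

The approximation stays deterministic (fixed sigma points, no sampling) and costs only 2d+1 kernel evaluations per expectation, which is the trade-off the abstract highlights.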


Extended and Unscented Gaussian Processes

Neural Information Processing Systems

We present two new methods for inference in Gaussian process (GP) models with general nonlinear likelihoods. Inference is based on a variational framework where a Gaussian posterior is assumed and the likelihood is linearized about the variational posterior mean using either a Taylor series expansion or statistical linearization. We show that the parameter updates obtained by these algorithms are equivalent to the state update equations in the iterative extended and unscented Kalman filters respectively, hence we refer to our algorithms as extended and unscented GPs. The unscented GP treats the likelihood as a 'black-box' by not requiring its derivative for inference, so it also applies to non-differentiable likelihood models. We evaluate the performance of our algorithms on a number of synthetic inversion problems and a binary classification dataset.
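A minimal sketch of the statistical-linearization step the abstract refers to, assuming the standard sigma-point construction: the nonlinear likelihood mean g is replaced by an affine approximation A f + b matched under the current Gaussian posterior, which only requires evaluations of g and no derivatives. Function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def unscented_linearize(g, m, S, kappa=1.0):
    """Statistically linearize y = g(f) about q(f) = N(m, S) using sigma points.

    Returns (A, b) with g(f) ~= A f + b in the mean-square sense under q,
    the same linearization used by the unscented Kalman filter.
    """
    d = m.shape[0]
    L = np.linalg.cholesky((d + kappa) * S)
    pts = np.vstack([m, m + L.T, m - L.T])            # (2d+1, d) sigma points
    w = np.full(2 * d + 1, 1.0 / (2 * (d + kappa)))
    w[0] = kappa / (d + kappa)

    Y = np.array([g(p) for p in pts])                 # propagated sigma points
    y_mean = w @ Y
    C_fy = (pts - m).T @ (w[:, None] * (Y - y_mean))  # cross-covariance of f and y
    A = np.linalg.solve(S, C_fy).T                    # A = C_fy^T S^{-1}
    b = y_mean - A @ m
    return A, b

# Example: a non-differentiable 'black-box' likelihood mean function.
g = lambda f: np.abs(f)                               # elementwise |f|
A, b = unscented_linearize(g, m=np.array([0.5, -1.0]), S=np.eye(2) * 0.2)
print(A, b)
```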


On the relation between Gaussian process quadratures and sigma-point methods

arXiv.org Machine Learning

This article is concerned with Gaussian process quadratures, which are numerical integration methods based on Gaussian process regression, and sigma-point methods, which are used in advanced non-linear Kalman filtering and smoothing algorithms. We show that many sigma-point methods can be interpreted as Gaussian process quadrature based methods with suitably selected covariance functions. We show that this interpretation also extends to more general multivariate Gauss-Hermite integration methods and related spherical cubature rules. Additionally, we discuss different criteria for selecting the sigma-point locations: exactness for multivariate polynomials up to a given order, minimum average error, and quasi-random point sets. The performance of the different methods is tested in numerical experiments.
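To make the comparison concrete, the sketch below evaluates the same Gaussian expectation with a tensor-product Gauss-Hermite rule and with the third-degree spherical cubature (sigma-point) rule; both are standard constructions, and the test integrand and parameter values are arbitrary illustrative choices rather than the paper's experiments.

```python
import itertools
import numpy as np

def gauss_hermite_expectation(f, m, P, order=5):
    """E[f(x)] for x ~ N(m, P) via a tensor-product Gauss-Hermite rule."""
    d = m.shape[0]
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    L = np.linalg.cholesky(P)
    total = 0.0
    for idx in itertools.product(range(order), repeat=d):
        unit = np.sqrt(2.0) * nodes[list(idx)]            # standard-normal nodes
        w = np.prod(weights[list(idx)]) / np.pi ** (d / 2.0)
        total += w * f(m + L @ unit)
    return total

def cubature_expectation(f, m, P):
    """Third-degree spherical cubature rule: 2d symmetric sigma points."""
    d = m.shape[0]
    L = np.linalg.cholesky(P)
    xs = np.sqrt(d) * np.hstack([np.eye(d), -np.eye(d)])  # unit sigma points
    return np.mean([f(m + L @ xs[:, i]) for i in range(2 * d)])

f = lambda x: np.sin(x[0]) * np.cos(x[1])                 # test integrand
m, P = np.zeros(2), np.array([[0.3, 0.1], [0.1, 0.4]])
print(gauss_hermite_expectation(f, m, P), cubature_expectation(f, m, P))
```

The cubature rule uses only 2d points, while the Gauss-Hermite product rule uses order^d points; the article's point is that both can be viewed as quadratures induced by particular covariance-function choices.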


Bayes Blocks: An Implementation of the Variational Bayesian Building Blocks Framework

arXiv.org Machine Learning

A software library for constructing and learning probabilistic models is presented. The library offers a set of building blocks from which a large variety of static and dynamic models can be built. These include hierarchical models for the variances of other variables and many nonlinear models. The underlying variational Bayesian machinery, which provides fast and robust estimation but is mathematically rather involved, is almost completely hidden from the user, making the library very easy to use. The building blocks include Gaussian, rectified Gaussian, and mixture-of-Gaussians variables, as well as computational nodes that can be combined rather freely.


Expectation Propagation for Neural Networks with Sparsity-promoting Priors

arXiv.org Machine Learning

We propose a novel approach for nonlinear regression using a two-layer neural network (NN) model structure with sparsity-favoring hierarchical priors on the network weights. We present an expectation propagation (EP) approach for approximate integration over the posterior distribution of the weights, the hierarchical scale parameters of the priors, and the residual scale. Using a factorized posterior approximation we derive a computationally efficient algorithm, whose complexity scales similarly to an ensemble of independent sparse linear models. The approach enables flexible definition of weight priors with different sparseness properties such as independent Laplace priors with a common scale parameter or Gaussian automatic relevance determination (ARD) priors with different relevance parameters for all inputs. The approach can be extended beyond standard activation functions and NN model structures to form flexible nonlinear predictors from multiple sparse linear models. The effects of the hierarchical priors and the predictive performance of the algorithm are assessed using both simulated and real-world data. Comparisons are made to two alternative models with ARD priors: a Gaussian process with a NN covariance function and marginal maximum a posteriori estimates of the relevance parameters, and a NN with Markov chain Monte Carlo integration over all the unknown model parameters.
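The abstract does not spell out the update equations, so the sketch below shows only a generic scalar EP site update with a sparsity-promoting Laplace factor, using Gauss-Hermite quadrature for the tilted moments; all function names and numeric values are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np

def ep_site_update(m, v, site_nu, site_tau, log_factor, n_quad=61):
    """One EP update for a scalar site under a Gaussian approximation.

    q(w) = N(m, v) is the current posterior approximation, (site_nu, site_tau)
    are the natural parameters of this site's Gaussian approximation, and
    log_factor(w) is the exact non-Gaussian factor, e.g. a Laplace prior.
    """
    # 1. Remove the site to form the cavity distribution.
    tau_cav = 1.0 / v - site_tau
    nu_cav = m / v - site_nu
    v_cav, m_cav = 1.0 / tau_cav, nu_cav / tau_cav

    # 2. Moments of the tilted distribution cavity * exact factor (quadrature).
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    ws = np.sqrt(2.0 * v_cav) * x + m_cav
    f = np.exp(log_factor(ws)) * w / np.sqrt(np.pi)
    Z = f.sum()
    mean = (f * ws).sum() / Z
    var = (f * ws ** 2).sum() / Z - mean ** 2

    # 3. Match moments and recover the updated site parameters.
    new_site_tau = 1.0 / var - tau_cav
    new_site_nu = mean / var - nu_cav
    return mean, var, new_site_nu, new_site_tau

# Example: Laplace factor with a common scale b (illustrative values only).
laplace_log = lambda w, b=0.3: -np.abs(w) / b - np.log(2.0 * b)
print(ep_site_update(m=0.8, v=1.0, site_nu=0.0, site_tau=0.0, log_factor=laplace_log))
```

In the paper's setting such site updates are applied within a factorized posterior over the network weights, scale parameters, and residual scale, which is what keeps the overall cost comparable to an ensemble of sparse linear models.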