
Approximate Inference Turns Deep Networks into Gaussian Processes

Mohammad Emtiyaz E. Khan, Alexander Immer, Ehsan Abedi, Maciej Korzepa

Neural Information Processing Systems

We present theoretical results aimed at connecting the training methods of deep learning and GP models. We show that the Gaussian posterior approximations for Bayesian DNNs, such as those obtained by Laplace approximation and variational inference (VI), are equivalent to posterior distributions of GP regression models.
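The simplest instance of this weight-space/function-space equivalence is exact for a linear(ised) model: the Gaussian posterior over the weights yields the same predictive mean as GP regression with the kernel k(x, x') = φ(x)ᵀφ(x')/α. The sketch below checks this numerically with random features standing in for the network's (linearised) Jacobian; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: random features play the role of the linearised network's Jacobian.
n, d = 20, 5                 # observations, feature dimension
alpha, sigma2 = 2.0, 0.1     # prior precision on weights, noise variance
Phi = rng.normal(size=(n, d))       # feature matrix, rows phi(x_i)
y = rng.normal(size=n)
phi_star = rng.normal(size=d)       # features of a test point

# Weight-space view: Gaussian (Laplace/VI-style) posterior over the weights.
Sigma_w = np.linalg.inv(alpha * np.eye(d) + Phi.T @ Phi / sigma2)
mu_w = Sigma_w @ Phi.T @ y / sigma2
f_weight = phi_star @ mu_w          # predictive mean at the test point

# Function-space view: GP regression with kernel k(x, x') = phi(x)^T phi(x') / alpha.
K = Phi @ Phi.T / alpha
k_star = Phi @ phi_star / alpha
f_gp = k_star @ np.linalg.solve(K + sigma2 * np.eye(n), y)

# The two predictive means coincide exactly.
assert np.allclose(f_weight, f_gp)
```

The equality follows from the standard matrix identity (A⁻¹ + BᵀC⁻¹B)⁻¹BᵀC⁻¹ = ABᵀ(BABᵀ + C)⁻¹; the paper's contribution is extending this kind of correspondence to approximate posteriors for deep networks.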



Information-Theoretic Safe Exploration with Gaussian Processes

Neural Information Processing Systems

A common approach is to place a Gaussian process prior on the unknown constraint and allow evaluations only in regions that are safe with high probability. Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case. Moreover, the way in which they exploit regularity assumptions about the constraint introduces an additional critical hyperparameter.
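The standard discretized construction works as follows: fit a GP to noisy constraint observations at known-safe points, then declare a grid point safe only if the posterior lower confidence bound clears the threshold. A minimal numpy sketch, with a hypothetical constraint q(x) ≥ 0 and an illustrative confidence parameter β:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel; ell encodes the regularity assumption
    # about the constraint (the critical hyperparameter mentioned above).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Hypothetical safety constraint q(x) >= 0, observed with noise at a few safe points.
rng = np.random.default_rng(1)
q = lambda x: np.sin(3 * x) + 0.5
X_obs = np.array([0.1, 0.2, 0.3])
y_obs = q(X_obs) + 0.01 * rng.normal(size=3)

# GP posterior over a discretized domain.
X = np.linspace(0.0, 1.0, 200)
K = rbf(X_obs, X_obs) + 1e-4 * np.eye(3)
k = rbf(X, X_obs)
mu = k @ np.linalg.solve(K, y_obs)
var = 1.0 - np.einsum('ij,ji->i', k, np.linalg.solve(K, k.T))

# Safe set: grid points whose lower confidence bound exceeds the threshold 0.
beta = 2.0
safe = mu - beta * np.sqrt(np.clip(var, 0.0, None)) >= 0.0
```

Points near the observations are classified safe, while points far away remain unsafe because the posterior uncertainty keeps their lower bound below the threshold; the discretization of X is exactly what prevents a direct extension to continuous domains.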


Sparse or

Neural Information Processing Systems

Table of evaluated hyperparameters (GPR baseline, q(·) free-form):

Dataset    N     d   GPR
Boston     506   13  3.049
Concrete   1030  8   4.864
Energy     768   8   0.441
WineRed    1599  11  0.640
Yacht      308   6   0.353


sup

Neural Information Processing Systems

C.1 2D Synthetic Benchmark. For both benchmarks, we sample 500 observations xi = (x1i, x2i) from each of the two in-domain classes (orange and blue), and consider a deep architecture, ResFFN-12-128, which contains 12 residual feedforward layers with 128 hidden units and dropout rate 0.01.
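A ResFFN of this shape can be sketched as a linear input projection followed by 12 residual feedforward blocks of width 128. The numpy forward pass below is only an illustration: the initialisation scheme, activation, and dropout placement are assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width, n_in, n_out = 12, 128, 2, 2  # ResFFN-12-128 on 2-D inputs, 2 classes

# Hypothetical 1/sqrt(fan-in) initialisation.
W_in = rng.normal(size=(n_in, width)) / np.sqrt(n_in)
blocks = [(rng.normal(size=(width, width)) / np.sqrt(width), np.zeros(width))
          for _ in range(depth)]
W_out = rng.normal(size=(width, n_out)) / np.sqrt(width)

def resffn(x, train=False, p_drop=0.01):
    h = x @ W_in
    for W, b in blocks:
        u = np.maximum(h @ W + b, 0.0)        # feedforward sub-layer with ReLU
        if train:                             # dropout rate 0.01, as in the benchmark
            u *= rng.random(u.shape) >= p_drop
        h = h + u                             # residual connection
    return h @ W_out                          # class logits

logits = resffn(rng.normal(size=(500, n_in)))  # 500 sampled observations
```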


Personalized Federated Learning with Gaussian Processes

Neural Information Processing Systems

GPs are highly expressive models that work well in the low data regime due to their Bayesian nature. However, applying GPs to PFL raises multiple challenges. Mainly, GP performance depends heavily on access to a good kernel function, and learning a kernel requires a large training set.




GaussianProcesses.jl: A Nonparametric Bayes package for the Julia Language

Fairbrother, Jamie, Nemeth, Christopher, Rischard, Maxime, Brea, Johanni

arXiv.org Machine Learning

Gaussian processes (GPs) are a family of stochastic processes which provide a flexible nonparametric tool for modelling data. In the most basic setting, a Gaussian process models a latent function based on a finite set of observations. The Gaussian process can be viewed as an extension of a multivariate Gaussian distribution to an infinite number of dimensions, where any finite combination of dimensions will result in a multivariate Gaussian distribution, which is completely specified by its mean and covariance functions. The choice of mean and covariance function (also known as the kernel) imposes smoothness assumptions on the latent function of interest and determines the correlation between output observations y as a function of the Euclidean distance between their respective input data points x. Gaussian processes have been widely used across a vast range of scientific and industrial fields, for example, to model astronomical time series (Foreman-Mackey et al., 2017) and brain networks (Wang et al., 2017), or for improved soil mapping (Gonzalez et al., 2007) and robotic control (Deisenroth et al., 2015). Arguably, the success of Gaussian processes in these various fields stems from the ease with which scientists and practitioners can apply Gaussian processes to their problems, as well as the general flexibility afforded to GPs for modelling various data forms.
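The "any finite combination of dimensions is multivariate Gaussian" view translates directly into code: evaluating the mean and kernel at a finite set of inputs gives an ordinary mean vector and covariance matrix, and a draw from the GP prior at those inputs is just a multivariate Gaussian sample. A minimal numpy sketch (the abstract describes the Julia package; Python is used here purely for illustration):

```python
import numpy as np

def sq_exp(x1, x2, ell=1.0):
    # Squared-exponential kernel: covariance decays with Euclidean distance.
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

x = np.linspace(-3.0, 3.0, 50)   # any finite set of input locations
mean = np.zeros_like(x)          # zero mean function
cov = sq_exp(x, x)               # covariance matrix from the kernel

# The marginal over these 50 points is an ordinary multivariate Gaussian,
# so a draw from the GP prior is just a draw from N(mean, cov).
rng = np.random.default_rng(0)
f = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(x)))
```

The small jitter added to the diagonal is the usual numerical safeguard that keeps the covariance matrix positive definite for sampling.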