Approximate Inference Turns Deep Networks into Gaussian Processes
Mohammad Emtiyaz Khan, Alexander Immer, Ehsan Abedi, Maciej Korzepa
We present theoretical results aimed at connecting the training methods of deep learning and GP models. We show that the Gaussian posterior approximations for Bayesian DNNs, such as those obtained by Laplace approximation and variational inference (VI), are equivalent to posterior distributions of GP regression models.
Information-Theoretic Safe Exploration with Gaussian Processes
A common approach is to place a Gaussian process prior on the unknown constraint and allow evaluations only in regions that are safe with high probability. Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case. Moreover, the way in which they exploit regularity assumptions about the constraint introduces an additional critical hyperparameter.
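The common approach described above can be sketched in a few lines: fit a GP to observations of the unknown constraint and keep only the points on a discretized domain whose lower confidence bound is non-negative. This is a minimal NumPy illustration, not the paper's method; the squared-exponential kernel, its lengthscale, and the confidence parameter beta are illustrative choices.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.5):
    """Squared-exponential kernel; encodes the regularity assumption on the constraint."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Standard GP regression posterior mean and standard deviation."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    mu = K_s.T @ np.linalg.solve(K, y_train)
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def safe_set(x_train, y_train, x_grid, beta=2.0):
    """Keep only grid points whose lower confidence bound on the constraint is non-negative."""
    mu, sigma = gp_posterior(x_train, y_train, x_grid)
    return x_grid[mu - beta * sigma >= 0.0]

# Constraint g observed at two known-safe points; g(x) >= 0 means "safe".
x_train = np.array([0.0, 0.2])
y_train = np.array([0.8, 0.7])
x_grid = np.linspace(-2.0, 2.0, 201)  # the discretization the abstract refers to
safe = safe_set(x_train, y_train, x_grid)
```

Points near the observations stay in the safe set because the posterior is confident there, while far-away points are excluded by their large posterior uncertainty; this is also where the criticized hyperparameter (here beta, together with the kernel lengthscale) enters.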
GaussianProcesses.jl: A Nonparametric Bayes package for the Julia Language
Jamie Fairbrother, Christopher Nemeth, Maxime Rischard, Johanni Brea
Gaussian processes (GPs) are a family of stochastic processes which provide a flexible nonparametric tool for modelling data. In the most basic setting, a Gaussian process models a latent function based on a finite set of observations. The Gaussian process can be viewed as an extension of a multivariate Gaussian distribution to an infinite number of dimensions, where any finite combination of dimensions will result in a multivariate Gaussian distribution, which is completely specified by its mean and covariance functions. The choice of mean and covariance function (also known as the kernel) imposes smoothness assumptions on the latent function of interest and determines the correlation between output observations y as a function of the Euclidean distance between their respective input data points x. Gaussian processes have been widely used across a vast range of scientific and industrial fields, for example, to model astronomical time series (Foreman-Mackey et al., 2017) and brain networks (Wang et al., 2017), or for improved soil mapping (Gonzalez et al., 2007) and robotic control (Deisenroth et al., 2015). Arguably, the success of Gaussian processes in these various fields stems from the ease with which scientists and practitioners can apply Gaussian processes to their problems, as well as the general flexibility afforded to GPs for modelling various data forms.
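The finite-marginal property described above can be shown directly: evaluating a mean function and a kernel on any finite input grid yields the mean vector and covariance matrix of a multivariate Gaussian, from which prior function samples can be drawn. This is a minimal NumPy sketch (not GaussianProcesses.jl itself); the zero mean and the kernel hyperparameters are arbitrary illustrative choices.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance: correlation decays with Euclidean distance."""
    sq_dist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

# Any finite set of inputs yields a multivariate Gaussian, completely
# specified by the mean function (here zero) and the kernel.
x = np.linspace(0.0, 5.0, 50)
mean = np.zeros_like(x)
cov = rbf_kernel(x, x)

# Draw three prior function samples via a Cholesky factor
# (small jitter on the diagonal for numerical stability).
rng = np.random.default_rng(0)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(x)))
samples = (mean[:, None] + L @ rng.standard_normal((len(x), 3))).T
```

Each row of `samples` is one draw from the GP prior restricted to the grid; shrinking the lengthscale makes the draws wigglier, which is the smoothness assumption the abstract mentions.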