Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo
Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes
Neural Information Processing Systems
Deep Gaussian Processes (DGPs) are hierarchical generalizations of Gaussian Processes that combine well-calibrated uncertainty estimates with the high flexibility of multilayer models. One of the biggest challenges with these models is that exact inference is intractable. The current state-of-the-art inference method, Variational Inference (VI), employs a Gaussian approximation to the posterior distribution. This is a potentially poor unimodal approximation of the generally multimodal posterior. In this work, we provide evidence for the non-Gaussian nature of the posterior and we apply the Stochastic Gradient Hamiltonian Monte Carlo method to generate samples from it. To efficiently optimize the hyperparameters, we introduce the Moving Window MCEM algorithm.
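For intuition about the sampling method named in the abstract, the sketch below shows a minimal Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) update on a toy bimodal target. The target, step size `eta`, friction `alpha`, and injected gradient noise are illustrative assumptions; this is not the paper's DGP sampler or the Moving Window MCEM procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad_U(theta, noise_std=1.0):
    """Noisy gradient of U(theta) = (theta^2 - 1)^2, a bimodal toy target.
    The added Gaussian noise stands in for minibatch gradient noise."""
    return 4.0 * theta * (theta ** 2 - 1.0) + noise_std * rng.standard_normal()

def sghmc(n_samples=20000, eta=1e-3, alpha=0.05):
    """Scale-adapted SGHMC update (Chen et al., 2014) with the noise
    estimate beta_hat set to 0:
        v     <- v - eta * grad_U(theta) - alpha * v + N(0, 2 * alpha * eta)
        theta <- theta + v
    """
    theta, v = 0.0, 0.0
    samples = np.empty(n_samples)
    for t in range(n_samples):
        v += (-eta * stoch_grad_U(theta) - alpha * v
              + np.sqrt(2.0 * alpha * eta) * rng.standard_normal())
        theta += v
        samples[t] = theta
    return samples

samples = sghmc()
print("sample mean:", samples.mean(), "sample std:", samples.std())
```

Because SGHMC only needs noisy gradients of the negative log posterior, the same update applies when `stoch_grad_U` is replaced by a minibatch gradient of a DGP's unnormalized posterior over the inducing outputs, which is the setting the paper targets.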