Scalable Training of Inference Networks for Gaussian-Process Models
Jiaxin Shi, Mohammad Emtiyaz Khan, Jun Zhu
Inference in Gaussian process (GP) models is computationally challenging for large data and often difficult to approximate with a small number of inducing points. We explore an alternative approximation that employs stochastic inference networks for flexible inference. Unfortunately, for such networks, minibatch training makes it difficult to learn meaningful correlations over function outputs across a large dataset. We propose an algorithm that enables such training by tracking a stochastic, functional mirror-descent algorithm. Each iteration requires considering only a finite number of input locations, resulting in a scalable and easy-to-implement algorithm. Empirical results show comparable and, at times, superior performance to existing sparse variational GP methods.
May-27-2019
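Below is a minimal, self-contained sketch of the idea the abstract describes, not the authors' implementation: at each step, an inference network is fit to a functional mirror-descent target evaluated only at a finite set of input locations (a data minibatch plus a few random measurement points). The random-feature "network", the Gaussian-likelihood closed form for the minibatch posterior, the mean-only mirror step, and all step sizes are assumptions made for illustration.

```python
# Hypothetical sketch: train an inference network by tracking a stochastic,
# functional mirror-descent update at finitely many input locations per step.
# This is NOT the paper's GPNet code; architecture and constants are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(3x) + noise.
N = 512
X = rng.uniform(-1.0, 1.0, size=(N, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=N)
noise_var = 0.1 ** 2

def rbf(a, b, ls=0.3):
    """Squared-exponential GP prior kernel."""
    d = a[:, None, 0] - b[None, :, 0]
    return np.exp(-0.5 * (d / ls) ** 2)

# Stand-in "inference network": random Fourier features + linear head,
# predicting the approximate posterior mean m(x). A real network (and a
# predicted variance) would replace this; it is kept linear to stay tiny.
D = 200
Wf = rng.normal(size=(1, D)) / 0.3
bf = rng.uniform(0.0, 2.0 * np.pi, size=D)
theta = np.zeros(D)

def features(x):
    return np.sqrt(2.0 / D) * np.cos(x @ Wf + bf)

beta, lr, B, n_extra = 0.2, 0.5, 32, 8
for step in range(3000):
    # Each iteration touches only a finite set of input locations:
    # a data minibatch plus a few random "measurement" points.
    idx = rng.choice(N, size=B, replace=False)
    Xb, yb = X[idx], y[idx]
    Xm = np.vstack([Xb, rng.uniform(-1.0, 1.0, size=(n_extra, 1))])

    # Exact GP posterior mean at Xm given only the minibatch, with the
    # noise rescaled by B/N so the batch stands in for the full data
    # (equivalent to raising the minibatch likelihood to the power N/B).
    Kbb = rbf(Xb, Xb) + (B / N) * noise_var * np.eye(B)
    m_post = rbf(Xm, Xb) @ np.linalg.solve(Kbb, yb)

    # Mirror-descent-style target: move the current q a step of size
    # beta toward that posterior (mean-only simplification).
    Phi = features(Xm)
    m_cur = Phi @ theta
    m_tgt = (1.0 - beta) * m_cur + beta * m_post

    # Fit the network to the target at the sampled locations only.
    theta -= lr * (2.0 / len(Xm)) * Phi.T @ (Phi @ theta - m_tgt)

rmse = np.sqrt(np.mean((features(X) @ theta - np.sin(3.0 * X[:, 0])) ** 2))
print("RMSE vs. true function:", rmse)
```

Note the scalability property the abstract emphasizes: no step ever forms an N x N kernel matrix; every update involves only the B + n_extra locations sampled at that iteration.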
- Country:
  - Asia
    - China > Beijing > Beijing (0.04)
    - Japan > Honshū > Kantō
      - Kanagawa Prefecture (0.04)
      - Tokyo Metropolis Prefecture > Tokyo (0.14)
    - Middle East > Jordan (0.04)
  - North America
    - Canada > Ontario > Toronto (0.14)
    - United States > California > Los Angeles County > Long Beach (0.04)
- Genre:
- Research Report > New Finding (0.87)