Scalable Log Determinants for Gaussian Process Kernel Learning
Kun Dong, David Eriksson, Hannes Nickisch, David Bindel, Andrew G. Wilson
Neural Information Processing Systems
We propose novel O(n) approaches to estimating log determinants of positive definite kernel matrices, and their derivatives, using only fast matrix-vector multiplications (MVMs). These stochastic approximations are based on Chebyshev, Lanczos, and surrogate models, and converge quickly even for kernel matrices with challenging spectra. We leverage these approximations to develop a scalable Gaussian process approach to kernel learning. We find that Lanczos is generally superior to Chebyshev for kernel learning, and that a surrogate approach can be highly efficient and accurate with popular kernels.
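To make the Lanczos-based estimator concrete, here is a minimal sketch of stochastic Lanczos quadrature for log det(A): Hutchinson's estimator with Rademacher probe vectors, where each quadratic form z^T log(A) z is approximated by Gauss quadrature from a short Lanczos tridiagonalization driven only by MVMs. This is an illustrative NumPy implementation of the general technique, not the authors' code; the function name `lanczos_logdet` and all parameter choices are assumptions for the example.

```python
import numpy as np

def lanczos_logdet(mv, n, num_probes=40, num_steps=30, seed=0):
    """Estimate log det(A) for an SPD n x n matrix A, accessed only
    through the matrix-vector product `mv`, via stochastic Lanczos
    quadrature. (Illustrative sketch; names/defaults are assumptions.)"""
    rng = np.random.default_rng(seed)
    est = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe vector
        znorm = np.linalg.norm(z)
        q = z / znorm
        q_prev = np.zeros(n)
        beta = 0.0
        alphas, betas = [], []
        # Lanczos tridiagonalization of A on the Krylov space of z
        for j in range(num_steps):
            w = mv(q) - beta * q_prev
            alpha = q @ w
            w -= alpha * q
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            if beta < 1e-10 or j == num_steps - 1:
                break                             # breakdown or budget hit
            betas.append(beta)
            q_prev, q = q, w / beta
        T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
        evals, evecs = np.linalg.eigh(T)
        # Gauss quadrature: nodes are eigenvalues of T, weights are the
        # squared first components of its eigenvectors
        est += znorm**2 * np.sum(evecs[0, :] ** 2 * np.log(evals))
    return est / num_probes
```

The estimator never forms A explicitly, so the per-iteration cost is dominated by the MVM, which is what makes the approach attractive when fast (e.g. structured-kernel) MVMs are available.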