
Inverse-Free Sparse Variational Gaussian Processes

Cortinovis, Stefano, Aitchison, Laurence, Eleftheriadis, Stefanos, van der Wilk, Mark

arXiv.org Machine Learning

Gaussian processes (GPs) offer appealing properties but are costly to train at scale. Sparse variational GP (SVGP) approximations reduce cost yet still rely on Cholesky decompositions of kernel matrices, ill-suited to low-precision, massively parallel hardware. While one can construct valid variational bounds that rely only on matrix multiplications (matmuls) via an auxiliary matrix parameter, optimising them with off-the-shelf first-order methods is challenging. We make the inverse-free approach practical by proposing a better-conditioned bound and deriving a matmul-only natural-gradient update for the auxiliary parameter, markedly improving stability and convergence. We further provide simple heuristics, such as step-size schedules and stopping criteria, that make the overall optimisation routine fit seamlessly into existing workflows. Across regression and classification benchmarks, we demonstrate that our method 1) serves as a drop-in replacement in SVGP-based models (e.g., deep GPs), 2) recovers similar performance to traditional methods, and 3) can be faster than baselines when well tuned.
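For context, variational bounds that avoid explicit inverses can be built from standard inequalities that introduce an auxiliary matrix. As a minimal illustration (not necessarily the exact bound proposed in this paper), for a positive-definite M x M kernel matrix $K_{uu}$ and any symmetric positive-definite auxiliary matrix $T$,

$$x^\top K_{uu}^{-1} x \;\ge\; 2\, x^\top T x - x^\top T K_{uu} T x, \qquad \log\lvert K_{uu}\rvert \;\le\; \operatorname{tr}(K_{uu} T) - \log\lvert T\rvert - M,$$

with equality in both when $T = K_{uu}^{-1}$. The kernel matrix enters the right-hand sides only through matrix multiplications, and the log-determinant of $T$ is cheap when $T$ is parameterised directly (e.g. through its Cholesky factor), so $T$ can be optimised jointly with the other variational parameters.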




Optimistic Meta-Gradients

Neural Information Processing Systems

We study the connection between gradient-based meta-learning and convex optimisation. We observe that gradient descent with momentum is a special case of meta-gradients, and building on recent results in optimisation, we prove convergence rates for meta-learning in the single-task setting.
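As background for the optimisation side of that connection, the gradient-descent-with-momentum (heavy-ball) update referred to above can be written in its standard form (stated here only as context, not as the paper's derivation):

$$m_{t+1} = \beta\, m_t + \nabla f(x_t), \qquad x_{t+1} = x_t - \eta\, m_{t+1},$$

where $\beta$ is the momentum coefficient and $\eta$ the step size; the abstract's observation is that this update arises as a special case of a meta-gradient update rule.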







Deep Set Prediction Networks

Neural Information Processing Systems

Concretely, we contribute the following: 1. We propose a model (Section 3, Algorithm 1) that can predict a set from a feature vector (vector-to-set) while properly taking the structure of sets into account.
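One well-known way to realise vector-to-set prediction while respecting permutation invariance is to decode the set by gradient descent on its elements, driving the encoding of the candidate set towards the target feature vector. The JAX sketch below illustrates that idea only; the encoder, function names, shapes, and hyperparameters (encode, predict_set, n_points, lr) are assumptions for illustration and not the paper's Algorithm 1.

import jax
import jax.numpy as jnp

def encode(w, points):
    # Permutation-invariant encoder: per-element linear map, nonlinearity, sum-pool.
    return jnp.tanh(points @ w).sum(axis=0)

def set_loss(points, w, target_feature):
    # Mismatch between the candidate set's encoding and the target feature vector.
    diff = encode(w, points) - target_feature
    return jnp.sum(diff ** 2)

def predict_set(w, target_feature, n_points=5, dim=2, steps=100, lr=0.1, seed=0):
    # Decode a set from a feature vector by gradient descent on the set elements.
    points = jax.random.normal(jax.random.PRNGKey(seed), (n_points, dim))
    grad_fn = jax.grad(set_loss)  # gradient with respect to the set elements
    for _ in range(steps):
        points = points - lr * grad_fn(points, w, target_feature)
    return points

# Toy usage: encode a reference set, then recover a set whose encoding matches it.
w = jax.random.normal(jax.random.PRNGKey(1), (2, 4))
reference_set = jax.random.normal(jax.random.PRNGKey(2), (5, 2))
predicted_set = predict_set(w, encode(w, reference_set))

Because the encoder pools over elements, the loss is invariant to the ordering of the predicted points, which is what makes this decoding scheme respect set structure.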