c1285fcadc52c0d3dc8813fc2c2e2b2a-AuthorFeedback.pdf

Neural Information Processing Systems

Our results certify that there exists an optimal linear preconditioner for quadratically convex constraint sets. As such, adaptive gradient methods can be minimax (rate) optimal. In online algorithms, the common practice [4, 5, 6, 7, 2] is to measure regret with respect to the "best" post-hoc regularizer. In this setting, the constraint set corresponds to the set of classifiers of interest, and the geometry of the gradients corresponds to the geometry of the features (or covariates). A generalized online mirror descent with applications to classification and regression.
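To make the role of a fixed linear preconditioner concrete, here is a minimal numerical sketch. It is a hypothetical illustration, not the paper's construction: on an ill-conditioned quadratic $f(x) = \frac{1}{2}x^\top H x$, plain gradient descent is throttled by the largest curvature, while the preconditioned step $x - P\nabla f(x)$ with $P = H^{-1}$ rescales the geometry and reaches the minimiser in one step.

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * x^T H x with H = diag(1, 100);
# gradient is H x. (Hypothetical example, not the paper's construction.)
H = np.array([1.0, 100.0])
x0 = np.array([5.0, 5.0])

# Plain gradient descent: the stable step size 1/L = 0.01 (L = 100 is the
# largest curvature) crawls along the flat coordinate.
y = x0.copy()
for _ in range(50):
    y -= 0.01 * (H * y)

# Preconditioned step x - P @ grad with P = H^{-1}: the geometry becomes
# spherical and a single unit step lands exactly at the minimiser 0.
z = x0 - (1.0 / H) * (H * x0)
```

The same effect is what adaptive gradient methods approximate online: they build a preconditioner from observed gradients rather than from a known Hessian.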



A Non-asymptotic Analysis for Learning and Applying a Preconditioner in MCMC

Max Hird, Florian Maire, Jeffrey Negrea

arXiv.org Machine Learning

Preconditioning is a common method applied to modify Markov chain Monte Carlo algorithms with the goal of making them more efficient. In practice it is often extremely effective, even when the preconditioner is learned from the chain. We analyse and compare the finite-time computational costs of schemes which learn a preconditioner based on the target covariance or the expected Hessian of the target potential with that of a corresponding scheme that does not use preconditioning. We apply our results to the Unadjusted Langevin Algorithm (ULA) for an appropriately regular target, establishing non-asymptotic guarantees for preconditioned ULA which learns its preconditioner. Our results are also applied to the unadjusted underdamped Langevin algorithm in the supplementary material. To do so, we establish non-asymptotic guarantees on the time taken to collect $N$ approximately independent samples from the target for schemes that learn their preconditioners under the assumption that the underlying Markov chain satisfies a contraction condition in the Wasserstein-2 distance. This approximate independence condition, that we formalize, allows us to bridge the non-asymptotic bounds of modern MCMC theory and classical heuristics of effective sample size and mixing time, and is needed to amortise the costs of learning a preconditioner across the many samples it will be used to produce.
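The two-phase scheme the abstract describes (a warm-up run, a preconditioner learned from the chain's covariance, then a preconditioned run) can be sketched for a 2-D Gaussian target. This is a toy illustration only; the step sizes, run lengths, and regularisation jitter below are arbitrary choices, not the paper's tuned constants or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: N(0, Sigma) with an ill-conditioned covariance, so the
# potential is U(x) = 0.5 * x^T Sigma^{-1} x and grad U(x) = Sigma^{-1} x.
Sigma = np.diag([1.0, 100.0])
Sigma_inv = np.linalg.inv(Sigma)

def ula(x0, M, h, n):
    """Preconditioned ULA: x <- x - h*M*grad_U(x) + sqrt(2h)*M^{1/2}*xi."""
    L = np.linalg.cholesky(M)          # M^{1/2} factor for the noise
    x, out = x0.copy(), np.empty((n, 2))
    for k in range(n):
        x = x - h * M @ (Sigma_inv @ x) + np.sqrt(2 * h) * L @ rng.standard_normal(2)
        out[k] = x
    return out

# Phase 1: a short unpreconditioned run (M = I) to learn the geometry.
warmup = ula(np.zeros(2), np.eye(2), h=0.5, n=2000)

# Preconditioner learned from the chain: the empirical target covariance,
# with a small jitter to keep it positive definite.
M_hat = np.cov(warmup.T) + 1e-6 * np.eye(2)

# Phase 2: preconditioned run; the effective conditioning is now ~1, so
# both coordinates decorrelate at comparable rates.
samples = ula(warmup[-1], M_hat, h=0.1, n=5000)
```

In this sketch the cost of Phase 1 is amortised across the Phase 2 samples, which is the trade-off the paper's non-asymptotic bounds quantify via the approximate-independence condition.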