

R-SVRG


Variance reduction for Riemannian non-convex optimization with batch size adaptation

Han, Andi, Gao, Junbin

arXiv.org Machine Learning

Variance reduction techniques are popular for accelerating gradient descent and stochastic gradient descent on optimization problems defined on both Euclidean space and Riemannian manifolds. In this paper, we further improve existing variance reduction methods for non-convex Riemannian optimization, including R-SVRG and R-SRG/R-SPIDER, with batch size adaptation. We show that this strategy achieves lower total complexities for optimizing both general non-convex and gradient-dominated functions under both finite-sum and online settings. As a result, we also provide a simpler convergence analysis for R-SVRG and improve the complexity bounds for R-SRG under the finite-sum setting. Specifically, we prove that R-SRG achieves the same near-optimal complexity as R-SPIDER without requiring a small step size. Empirical experiments on a variety of tasks demonstrate the effectiveness of the proposed adaptive batch size scheme.
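To make the batch-size-adaptation idea concrete, here is a minimal sketch assuming a Euclidean SVRG-style loop and a simple geometric growth rule for the mini-batch; the paper derives its own adaptation rule and works on Riemannian manifolds, and names such as `svrg_adaptive_batch` and `grow_factor` are illustrative rather than taken from the paper.

```python
import numpy as np

def svrg_adaptive_batch(grads, x0, n, step=0.1, epochs=5,
                        inner_iters=50, b0=4, grow_factor=2):
    """Euclidean SVRG with a geometrically growing mini-batch size.

    grads(x, idx) should return the average gradient of the loss
    components indexed by `idx` at the point x.  The schedule used
    here (double the batch each epoch, capped at n) is only one
    simple choice, not the paper's schedule.
    """
    x_snapshot = np.asarray(x0, dtype=float).copy()
    batch = b0
    for _ in range(epochs):
        full_grad = grads(x_snapshot, np.arange(n))   # full gradient at the snapshot
        x = x_snapshot.copy()
        for _ in range(inner_iters):
            idx = np.random.choice(n, size=min(batch, n), replace=False)
            # variance-reduced estimator: g_idx(x) - g_idx(snapshot) + full_grad
            v = grads(x, idx) - grads(x_snapshot, idx) + full_grad
            x = x - step * v
        x_snapshot = x
        batch = min(n, batch * grow_factor)           # adapt the batch size
    return x_snapshot
```

Growing the batch shrinks the variance of the stochastic correction term in later epochs, which is what lets the adaptive scheme keep a larger step size while retaining the variance-reduction guarantees.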


Riemannian stochastic variance reduced gradient

Sato, Hiroyuki, Kasai, Hiroyuki, Mishra, Bamdev

arXiv.org Machine Learning

Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large but finite number of loss functions. In this paper, we propose a novel Riemannian extension of the Euclidean stochastic variance reduced gradient algorithm (R-SVRG) to a manifold search space. The key challenges of averaging, adding, and subtracting multiple gradients are addressed with retraction and vector transport. We present a global convergence analysis of the proposed algorithm with a decaying step size, and a local convergence rate analysis with a fixed step size under some natural assumptions. The proposed algorithm is applied to problems on the Grassmann manifold, such as principal component analysis, low-rank matrix completion, and computation of the Karcher mean of subspaces, and outperforms the standard Riemannian stochastic gradient descent algorithm in each case.
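As an illustration of the retraction and vector transport ingredients, the sketch below runs an R-SVRG-style inner loop on the unit sphere for the leading-eigenvector problem, assuming a normalization retraction and a projection-based vector transport. The paper's experiments use the Grassmann manifold and its own choice of these maps, so treat this only as a toy stand-in.

```python
import numpy as np

def sphere_proj(x, v):
    """Project v onto the tangent space of the unit sphere at x."""
    return v - np.dot(x, v) * x

def retract(x, v):
    """Retraction: step along the tangent vector v and renormalize onto the sphere."""
    y = x + v
    return y / np.linalg.norm(y)

def rgrad(A_rows, x, idx):
    """Riemannian gradient of f(x) = -mean_i (a_i^T x)^2 over the rows in idx."""
    a = A_rows[idx]
    egrad = -2.0 * a.T @ (a @ x) / len(idx)           # Euclidean gradient
    return sphere_proj(x, egrad)                      # project to the tangent space

def rsvrg_sphere(A_rows, x0, step=0.05, epochs=10, inner_iters=100):
    n = A_rows.shape[0]
    x_tilde = x0 / np.linalg.norm(x0)                 # snapshot point
    for _ in range(epochs):
        g_tilde = rgrad(A_rows, x_tilde, np.arange(n))   # full Riemannian gradient at snapshot
        x = x_tilde.copy()
        for _ in range(inner_iters):
            i = np.random.randint(n)
            # transport the snapshot terms to the tangent space at x (projection transport),
            # then form the variance-reduced direction
            correction = sphere_proj(x, rgrad(A_rows, x_tilde, [i]) - g_tilde)
            v = rgrad(A_rows, x, [i]) - correction
            x = retract(x, -step * v)
        x_tilde = x
    return x_tilde
```

For example, calling `rsvrg_sphere` on a matrix whose rows are data vectors a_i approximates the top eigenvector of (1/n) sum_i a_i a_i^T, which is the sphere-constrained analogue of the PCA experiment mentioned in the abstract.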


Riemannian stochastic variance reduced gradient on Grassmann manifold

Kasai, Hiroyuki, Sato, Hiroyuki, Mishra, Bamdev

arXiv.org Machine Learning

Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large but finite number of loss functions. In this paper, we propose a novel Riemannian extension of the Euclidean stochastic variance reduced gradient algorithm (R-SVRG) to a compact manifold search space. To this end, we develop the algorithm on the Grassmann manifold. The key challenges of averaging, adding, and subtracting multiple gradients are addressed with the logarithm map and parallel translation of vectors on the Grassmann manifold. We present a global convergence analysis of the proposed algorithm with decaying step sizes, and a local convergence rate analysis with a fixed step size under some natural assumptions. The proposed algorithm is applied to a number of problems on the Grassmann manifold, such as principal component analysis, low-rank matrix completion, and Karcher mean computation. In all these cases, it outperforms the standard Riemannian stochastic gradient descent algorithm.
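The logarithm map and parallel translation mentioned above replace the point and gradient addition and subtraction that the Euclidean algorithm takes for granted. Below is a hedged sketch of the exponential and logarithm maps on the Grassmann manifold Gr(n, p), with points represented as n x p matrices with orthonormal columns; the formulas follow the standard Edelman-Arias-Smith expressions, not the paper's own code.

```python
import numpy as np

def grassmann_exp(X, H):
    """Exponential map on Gr(n, p): follow the geodesic from X in direction H.

    X is an n x p matrix with orthonormal columns; H is a tangent vector
    at X (so X.T @ H == 0).  Uses the thin SVD H = U diag(s) V^T.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Y = X @ Vt.T @ np.diag(np.cos(s)) @ Vt + U @ np.diag(np.sin(s)) @ Vt
    Q, _ = np.linalg.qr(Y)        # re-orthonormalize to fight round-off drift
    return Q

def grassmann_log(X, Y):
    """Logarithm map on Gr(n, p): tangent vector H at X with exp_X(H) = span(Y).

    Assumes X.T @ Y is invertible, i.e. the two subspaces are not orthogonal.
    """
    XtY = X.T @ Y
    M = (Y - X @ XtY) @ np.linalg.inv(XtY)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Vt
```

In an R-SVRG outer loop, the logarithm map plays the role of "subtracting" the snapshot point from the current iterate, while parallel translation (or a cheaper vector transport, as in the previous sketch) carries the snapshot gradients into the current tangent space before they are combined.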