Gradient Descent Meets Shift-and-Invert Preconditioning for Eigenvector Computation
Neural Information Processing Systems
Shift-and-invert preconditioning, a classic acceleration technique for leading eigenvector computation, has recently received renewed attention, owing to fast least-squares solvers that efficiently approximate the matrix inversion in each power iteration. In this work, we adopt an inexact Riemannian gradient descent perspective to investigate the effect of the step-size scheme on this technique. The shift-and-inverted power method is recovered as a special case with adaptive step-sizes. In particular, two further step-size settings, constant step-sizes and Barzilai-Borwein (BB) step-sizes, are examined theoretically and/or empirically. Our experiments show that the proposed algorithm can be significantly faster than the shift-and-inverted power method in practice.
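For concreteness, below is a minimal NumPy/SciPy sketch of the inexact Riemannian gradient descent view the abstract describes; it is not the paper's implementation. The shift `sigma`, the iteration counts, and the use of conjugate gradients as the inexact least-squares solver are illustrative assumptions; `sigma` must exceed the largest eigenvalue of `A` so that `sigma*I - A` is positive definite.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def shift_invert_gd(A, sigma, x0, n_iters=50, eta=None, cg_iters=20):
    """Approximate the leading eigenvector of a symmetric matrix A.

    Each step applies B = (sigma*I - A)^{-1} inexactly (a few CG iterations
    stand in for a fast least-squares solver), then takes a Riemannian
    gradient step on the unit sphere. With eta=None, the adaptive step-size
    1/(x^T B x) is used, under which the update reduces to the
    shift-and-inverted power method.
    """
    n = A.shape[0]
    shifted = LinearOperator((n, n), matvec=lambda v: sigma * v - A @ v)
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iters):
        # Inexact preconditioner application: solve (sigma*I - A) y ~= x.
        y, _ = cg(shifted, x, maxiter=cg_iters)
        rayleigh = x @ y                 # x^T B x (up to solver error)
        grad = y - rayleigh * x          # Riemannian gradient direction on the sphere
        step = 1.0 / rayleigh if eta is None else eta
        x = x + step * grad              # adaptive step gives x <- B x before normalization
        x /= np.linalg.norm(x)
    return x
```

Passing a fixed float for `eta` gives the constant step-size scheme; a BB step-size would instead be computed from differences of successive iterates and gradients.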