Preconditioning Matters: Fast Global Convergence of Non-convex Matrix Factorization via Scaled Gradient Descent
Neural Information Processing Systems
Low-rank matrix factorization (LRMF) is a canonical problem in non-convex optimization: the objective function to be minimized is non-convex and even non-smooth, which makes global convergence guarantees for gradient-based algorithms quite challenging. Moreover, the dependence of the convergence on the \textit{condition number} $\kappa$ and on a \textit{small learning rate} makes plain gradient descent impractical, especially for ill-conditioned LRMF problems. In this paper, we show that preconditioning helps accelerate convergence, and we prove that scaled gradient descent (ScaledGD) and its variant, alternating scaled gradient descent (AltScaledGD), converge to an $\varepsilon$-global minimum after $O(\ln\frac{d}{\delta}\ln\frac{d}{\varepsilon})$ iterations from a general random initialization. Furthermore, we prove that, as an approximation of alternating minimization, AltScaledGD converges faster than ScaledGD, and its global convergence relies neither on a small learning rate nor on a small initialization, which certifies the advantages of AltScaledGD in LRMF.
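To illustrate the preconditioned updates the abstract refers to, the following is a minimal NumPy sketch of the ScaledGD and AltScaledGD iterations for the factorization loss $f(X,Y)=\tfrac{1}{2}\|XY^\top - M\|_F^2$. The function name, step size, iteration count, and random initialization are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def scaled_gd(M, r, eta=0.5, iters=200, alternating=False, seed=0):
    """Sketch of ScaledGD / AltScaledGD for f(X, Y) = 0.5 * ||X Y^T - M||_F^2.

    The preconditioners (Y^T Y)^{-1} and (X^T X)^{-1} rescale each factor's
    gradient, which is what removes the condition-number dependence of the
    step size.  With alternating=True the Y-step uses the freshly updated X
    (AltScaledGD); otherwise both steps use the previous iterates (ScaledGD).
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.standard_normal((m, r))   # general random initialization
    Y = rng.standard_normal((n, r))
    for _ in range(iters):
        R = X @ Y.T - M                                      # residual
        X_next = X - eta * R @ Y @ np.linalg.inv(Y.T @ Y)    # preconditioned X-step
        if alternating:
            X = X_next                                       # refresh X before the Y-step
            R = X @ Y.T - M
        Y = Y - eta * R.T @ X @ np.linalg.inv(X.T @ X)       # preconditioned Y-step
        X = X_next
    return X, Y

# Example usage: recover a random rank-3 matrix.
# M = np.random.randn(50, 3) @ np.random.randn(3, 40)
# X, Y = scaled_gd(M, r=3, alternating=True)
# print(np.linalg.norm(X @ Y.T - M))
```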