Accelerating SGD for Highly Ill-Conditioned Huge-Scale Online Matrix Completion

Gavin Zhang, University of Illinois at Urbana–Champaign, jialun2@illinois.edu; Hong-Ming Chiu, University of Illinois at Urbana–Champaign, hmchiu2@illinois.edu; Richard Y. Zhang, University of Illinois at Urbana–Champaign, ryz@illinois.edu

Neural Information Processing Systems 

The matrix completion problem seeks to recover a d × d ground truth matrix of low rank r ≪ d from observations of its individual elements. Real-world matrix completion is often a huge-scale optimization problem, with d so large that even the simplest full-dimension vector operations with O(d) time complexity become prohibitively expensive. Stochastic gradient descent (SGD) is one of the few algorithms capable of solving matrix completion on a huge scale, and it can also naturally handle streaming data over an evolving ground truth. Unfortunately, SGD experiences a dramatic slow-down when the underlying ground truth is ill-conditioned; it requires at least O(κ log(1/ϵ)) iterations to get ϵ-close to a ground truth matrix with condition number κ. In this paper, we propose a preconditioned version of SGD that preserves all the favorable practical qualities of SGD for huge-scale online optimization while also making it agnostic to κ. For a symmetric ground truth and the Root Mean Square Error (RMSE) loss, we prove that the preconditioned SGD converges to ϵ-accuracy in O(log(1/ϵ)) iterations, with a rapid linear convergence rate as if the ground truth were perfectly conditioned with κ = 1. In our experiments, we observe a similar acceleration for item-item collaborative filtering on the MovieLens25M dataset via a pair-wise ranking loss, with 100 million training pairs and 10 million testing pairs.
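The abstract does not spell out the preconditioned update itself, but the idea it describes admits a compact illustration: scale each stochastic gradient by a small r × r matrix such as (XᵀX)⁻¹, so that the step adapts to the conditioning of the current factor X while each update still touches only two rows. The sketch below is a minimal NumPy rendering of one such step for the symmetric model M ≈ XXᵀ under the squared entry-wise residual; the function name, learning rate, and exact form of the preconditioner are illustrative assumptions, not the paper's verbatim algorithm.

```python
import numpy as np

def preconditioned_sgd_step(X, i, j, m_ij, lr=0.1):
    """One stochastic step on an observed entry (i, j) of a symmetric
    ground truth M ~= X @ X.T, minimizing 0.5 * (x_i . x_j - M_ij)^2.

    Illustrative sketch only: the r x r preconditioner (X^T X)^{-1} is
    recomputed from scratch here for clarity; a huge-scale implementation
    would maintain it incrementally so no O(d)-cost work is done per step.
    Assumes i != j.
    """
    residual = X[i] @ X[j] - m_ij       # scalar error on this entry
    P = np.linalg.inv(X.T @ X)          # r x r preconditioner (assumed form)
    grad_i = residual * X[j]            # gradient of the loss w.r.t. row i
    grad_j = residual * X[i]            # gradient of the loss w.r.t. row j
    X[i] = X[i] - lr * grad_i @ P       # preconditioned row updates;
    X[j] = X[j] - lr * grad_j @ P       # plain SGD would omit the "@ P"
    return X
```

With an identity preconditioner this reduces to ordinary SGD, whose iteration count scales with κ; the r × r rescaling is what the abstract credits with making convergence behave as if κ = 1.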

