Scalable Robust Matrix Factorization with Nonconvex Loss

Quanming Yao, James Kwok

Neural Information Processing Systems 

Even the state-of-the-art solver for robust matrix factorization (RMF), RMF-MM, is slow and cannot utilize data sparsity. In this paper, we propose to improve robustness by using nonconvex loss functions. The resulting optimization problem, however, is difficult to solve. To improve efficiency and scalability, we use majorization-minimization (MM) and optimize the MM surrogate by applying the accelerated proximal gradient algorithm to its dual problem. Data sparsity can also be exploited. The resulting algorithm has low time and space complexities and is guaranteed to converge to a critical point. Extensive experiments show that it outperforms the state of the art in terms of both accuracy and speed.
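For concreteness, below is a minimal NumPy sketch of the generic MM template the abstract alludes to, applied to matrix factorization with a common nonconvex robust loss phi(e) = log(1 + |e|/theta). It is an illustration under stated assumptions, not the paper's algorithm: in place of the paper's MM surrogate and its dual accelerated-proximal-gradient solver, each MM step here majorizes phi by an IRLS-style quadratic and decreases the resulting weighted least-squares surrogate with a few alternating ridge sweeps. The function robust_mf_mm and all parameter names are hypothetical.

```python
import numpy as np


def robust_mf_mm(M, rank, theta=1.0, lam=0.1, eps=1e-6,
                 n_outer=30, n_inner=3, seed=0):
    """MM sketch (illustrative, not the paper's method) for
    min_{U,V}  sum_ij log(1 + |M - U V^T|_ij / theta)
               + lam * (||U||_F^2 + ||V||_F^2).
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    R = lam * np.eye(rank)
    for _ in range(n_outer):
        # Majorize the loss at the current residual e_t:
        #   phi(e) <= phi(e_t) + w_t * (e^2 - e_t^2),
        #   w_t = phi'(|e_t|) / (2 |e_t|),  phi'(|e|) = 1 / (theta + |e|);
        # eps is a standard smoothing that guards the |e_t| = 0 case.
        E = np.abs(M - U @ V.T)
        W = 1.0 / (2.0 * (E + eps) * (theta + E))
        # Decrease the weighted least-squares surrogate with alternating
        # row-wise ridge solves (exact minimization is not required for MM).
        for _ in range(n_inner):
            for i in range(m):
                G = V * W[i][:, None]          # V^T diag(w_i.) implicitly
                U[i] = np.linalg.solve(G.T @ V + R, G.T @ M[i])
            for j in range(n):
                G = U * W[:, j][:, None]
                V[j] = np.linalg.solve(G.T @ U + R, G.T @ M[:, j])
    return U, V


if __name__ == "__main__":
    # Toy check: rank-3 matrix corrupted by 5% large sparse outliers.
    rng = np.random.default_rng(1)
    L = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 50))
    M = L.copy()
    mask = rng.random(M.shape) < 0.05
    M[mask] += 10.0 * rng.standard_normal(mask.sum())
    U, V = robust_mf_mm(M, rank=3)
    # Measure error against the clean low-rank matrix, not the corrupted M.
    print("relative error:", np.linalg.norm(L - U @ V.T) / np.linalg.norm(L))
```

The quadratic majorizer is a simple stand-in chosen so each subproblem has a closed form; the paper instead solves its surrogate via an accelerated proximal gradient method on the dual, which is what yields the low time and space complexities and the sparsity exploitation claimed in the abstract.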