LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain
Zeyuan Allen-Zhu, Yuanzhi Li
Neural Information Processing Systems
We study k-SVD, the problem of obtaining the first k singular vectors of a matrix A. Recently, several breakthroughs have been made on k-SVD: Musco and Musco [1] proved the first gap-free convergence result using the block Krylov method, Shamir [2] discovered the first variance-reduction stochastic method, and Bhojanapalli et al. [3] provided the fastest $O(\mathsf{nnz}(A) + \mathsf{poly}(1/\varepsilon))$-time algorithm using alternating minimization. In this paper, we put forward a new and simple LazySVD framework that improves on all of the above breakthroughs. This framework yields a faster gap-free method outperforming [1], and the first accelerated and stochastic method outperforming [2]. In the $O(\mathsf{nnz}(A) + \mathsf{poly}(1/\varepsilon))$ running-time regime, LazySVD outperforms [3] in certain parameter regimes without even using alternating minimization.
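The core idea behind LazySVD is to compute the k singular vectors one at a time: run an approximate 1-SVD (top singular vector) solver, deflate that direction from the matrix, and repeat k times. The sketch below illustrates this loop with plain power iteration standing in for the fast accelerated/stochastic 1-PCA subroutines the paper actually plugs in; function names and parameters here are illustrative, not the authors' API.

```python
import numpy as np

def top_right_singular_vector(M, iters=200, seed=0):
    # Power iteration on M^T M -- a simple stand-in for the accelerated
    # or variance-reduced 1-PCA solvers used inside LazySVD.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = M.T @ (M @ v)
        v /= np.linalg.norm(v)
    return v

def lazy_svd_sketch(A, k):
    # LazySVD-style loop: approximate top right singular vector, then
    # deflate (project that direction out of the matrix), k times.
    M = A.astype(float).copy()
    V = []
    for _ in range(k):
        v = top_right_singular_vector(M)
        V.append(v)
        M = M - M @ np.outer(v, v)  # remove the captured direction
    return np.column_stack(V)  # columns approximate the top-k right singular vectors
```

Because each round only needs a rank-1 solve, any improvement to 1-SVD solvers (acceleration, variance reduction) transfers directly to the k-SVD problem; this is the sense in which the framework is "lazy".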
Dec-31-2016