Exponentially convergent stochastic k-PCA without variance reduction
We show, both theoretically and empirically, that the proposed algorithm naturally adapts to the low-rankness of the data and converges exponentially fast to the ground-truth principal subspace. Notably, our result suggests that, despite various recent efforts to accelerate the convergence of stochastic-gradient-based methods by adding an O(n)-time variance-reduction step, for the k-PCA problem a truly online SGD variant suffices to achieve exponential convergence on intrinsically low-rank data.
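The abstract does not spell out the update rule, so as a point of reference here is a minimal sketch of a truly online, SGD-style k-PCA iteration in the spirit it describes: an Oja-style gradient step on each incoming sample followed by re-orthonormalization, with no variance-reduction pass over the data. The step size `eta`, the dimensions, and the synthetic low-rank stream are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def online_kpca(stream, d, k, eta=0.05, rng=None):
    """Oja-style online k-PCA sketch: one O(dk) stochastic-gradient
    step per sample, no O(n)-time variance-reduction correction."""
    rng = rng or np.random.default_rng(0)
    # Random orthonormal initialization of the d x k iterate.
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for x in stream:
        W += eta * np.outer(x, x @ W)  # stochastic gradient step on sample x
        W, _ = np.linalg.qr(W)         # re-orthonormalize the columns
    return W

def subspace_error(W, U):
    """Residual of W outside the ground-truth subspace: ||(I - U U^T) W||_F^2."""
    return np.linalg.norm(W - U @ (U.T @ W), "fro") ** 2

# Intrinsically low-rank data: x = A z with A of rank k (noiseless).
rng = np.random.default_rng(0)
d, k, n = 50, 3, 20000
A = rng.standard_normal((d, k))
U, _ = np.linalg.qr(A)  # ground-truth principal subspace
stream = (A @ rng.standard_normal(k) for _ in range(n))
W = online_kpca(stream, d, k, eta=0.05, rng=rng)
print(f"subspace error after {n} samples: {subspace_error(W, U):.2e}")
```

Each iteration touches only the current sample, so the per-step cost is O(dk); this is the contrast the abstract draws with variance-reduced schemes, whose correction step requires an O(n)-time pass over the full dataset.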
Apr-2-2019