Exponentially convergent stochastic k-PCA without variance reduction
Neural Information Processing Systems
We show, both theoretically and empirically, that the algorithm naturally adapts to data low-rankness and converges exponentially fast to the ground-truth principal subspace. Notably, our result suggests that, despite various recent efforts to accelerate the convergence of stochastic-gradient-based methods by adding an O(n)-time variance-reduction step, for the k-PCA problem a truly online SGD variant suffices to achieve exponential convergence on intrinsically low-rank data.
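As a rough illustration of what a variance-reduction-free, truly online k-PCA update can look like, here is a minimal sketch using an Oja's-rule-style stochastic gradient step followed by re-orthonormalization. This is an assumed, generic formulation, not the paper's exact algorithm; the function name `oja_kpca`, the step size `eta`, and the QR-based retraction are hypothetical choices for the sketch.

```python
import numpy as np

def oja_kpca(sample_stream, d, k, eta=0.05, seed=0):
    """Streaming k-PCA sketch in the spirit of Oja's rule (hypothetical;
    not the paper's exact update or step-size schedule).

    Maintains a d x k orthonormal basis U; each sample x drives a
    stochastic-gradient step toward the top-k eigenspace of E[x x^T],
    followed by re-orthonormalization via QR.
    """
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal start
    for x in sample_stream:
        x = np.asarray(x).reshape(-1, 1)
        U = U + eta * (x @ (x.T @ U))  # gradient step with the rank-1 sample (x x^T) U
        U, _ = np.linalg.qr(U)         # retract back to an orthonormal basis
    return U  # columns span the estimated top-k principal subspace

if __name__ == "__main__":
    # Toy check: samples drawn from a rank-k ground-truth subspace plus small noise.
    d, k, n = 50, 3, 5000
    rng = np.random.default_rng(1)
    V, _ = np.linalg.qr(rng.standard_normal((d, k)))  # ground-truth subspace
    stream = (V @ rng.standard_normal(k) + 1e-2 * rng.standard_normal(d)
              for _ in range(n))
    U = oja_kpca(stream, d, k)
    # Singular values of V^T U near 1 indicate the subspace was recovered.
    print(np.linalg.svd(V.T @ U, compute_uv=False))
```

Each update costs O(dk^2) and touches one sample, so the loop is fully online; on intrinsically low-rank data this kind of iteration is the regime in which the paper's result suggests exponential convergence without any O(n)-time variance-reduction pass.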