Exponentially convergent stochastic k-PCA without variance reduction
We show, both theoretically and empirically, that the algorithm naturally adapts to data low-rankness and converges exponentially fast to the ground-truth principal subspace. Notably, our result suggests that despite various recent efforts to accelerate the convergence of stochastic-gradient-based methods by adding an O(n)-time variance reduction step, for the k-PCA problem, a truly online SGD variant suffices to achieve exponential convergence on intrinsically low-rank data.
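The abstract does not spell out the update rule, but a truly online SGD variant for k-PCA is classically given by Oja's rule: on each sample, take a gradient-style step toward the sample's direction and re-orthonormalize. The sketch below is an assumption-laden illustration of that generic scheme (the step size `eta`, the QR re-orthonormalization, and the function name are choices made here, not details from the paper):

```python
import numpy as np

def oja_kpca(stream, k, eta=0.1, seed=0):
    """One-pass stochastic k-PCA sketch in the style of Oja's rule.

    Hypothetical illustration: the paper's exact algorithm is not
    reproduced here. Each sample x drives the additive update
    W <- W + eta * x (x^T W), followed by QR re-orthonormalization.
    """
    W = None
    for x in stream:
        if W is None:
            # Random orthonormal initialization of the k-dim iterate.
            rng = np.random.default_rng(seed)
            W, _ = np.linalg.qr(rng.standard_normal((x.shape[0], k)))
        W = W + eta * np.outer(x, x @ W)  # stochastic gradient-style step
        W, _ = np.linalg.qr(W)            # project back onto orthonormal frames
    return W
```

On exactly rank-k data, the additive term only amplifies components of `W` inside the data subspace, while components orthogonal to it receive no updates and shrink after each QR step, which is consistent with the exponential-convergence behavior the abstract describes for intrinsically low-rank data.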
Apr-2-2019