Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds
Daniel Dodd, Louis Sharrock, Christopher Nemeth
arXiv.org Artificial Intelligence
In recent years, interest in gradient-based optimization over Riemannian manifolds has surged. However, a significant challenge lies in the reliance on hyperparameters, especially the learning rate, which requires meticulous tuning by practitioners to ensure convergence at a suitable rate. In this work, we introduce learning-rate-free algorithms for stochastic optimization over Riemannian manifolds, eliminating the need for hand-tuning and providing a more robust and user-friendly approach. We establish high-probability convergence guarantees that match, up to logarithmic factors, the best-known rate attainable with an optimally tuned learning rate in the deterministic setting. Our approach is validated through numerical experiments demonstrating competitive performance against learning-rate-dependent algorithms.
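To make the tuning burden concrete, the sketch below (not the paper's algorithm) runs standard Riemannian gradient ascent on the unit sphere for the Rayleigh quotient, i.e. finding the leading eigenvector of a symmetric matrix. The Euclidean gradient is projected onto the tangent space and the iterate is retracted back onto the sphere by normalization; the fixed step size `eta` is exactly the hand-chosen hyperparameter that the paper's learning-rate-free methods remove. All names and the choice of objective here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: fixed-step Riemannian gradient ascent on the
# unit sphere S^{d-1}, maximizing the Rayleigh quotient x^T A x.
# The hand-tuned learning rate `eta` is the hyperparameter that
# learning-rate-free methods aim to eliminate.

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
A = A @ A.T / d  # symmetric positive semi-definite, moderate spectrum

x = rng.standard_normal(d)
x /= np.linalg.norm(x)  # start on the sphere

eta = 0.05  # hand-tuned; too large diverges/oscillates, too small stalls
for _ in range(2000):
    g = 2 * A @ x            # Euclidean gradient of x^T A x
    rg = g - (x @ g) * x     # project onto the tangent space at x
    x = x + eta * rg         # ascent step along the Riemannian gradient
    x /= np.linalg.norm(x)   # retraction: map back onto the sphere

rayleigh = x @ A @ x  # approaches the largest eigenvalue of A
```

With a well-chosen `eta`, the iterates converge to the leading eigenvector; with a poorly chosen one they can stall or oscillate, which is the sensitivity the learning-rate-free approach is designed to avoid.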
Jun-4-2024