Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning
Neural Information Processing Systems
We consider the minimization of a convex objective function defined on a Hilbert space, which is only available through unbiased estimates of its gradients. This problem includes standard machine learning algorithms such as kernel logistic regression and least-squares regression, and is commonly referred to as a stochastic approximation problem in the operations research community. We provide a non-asymptotic analysis of the convergence of two well-known algorithms, stochastic gradient descent (a.k.a. Robbins-Monro algorithm) as well as a simple modification where iterates are averaged (a.k.a. Polyak-Ruppert averaging).
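The two algorithms the abstract compares can be sketched in a few lines: stochastic gradient descent with a decaying step size (Robbins-Monro), and a running average of its iterates (Polyak-Ruppert averaging). Below is a minimal illustration on a synthetic least-squares regression problem; the problem sizes, step size proportional to 1/sqrt(t), and variable names are assumptions for the sketch, not the paper's exact setup.

```python
import numpy as np

# Synthetic least-squares problem: y = X w_true + noise (illustrative setup).
rng = np.random.default_rng(0)
n, d = 5000, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)      # SGD iterate (Robbins-Monro)
w_avg = np.zeros(d)  # Polyak-Ruppert average of the iterates
for t in range(n):
    x_t, y_t = X[t], y[t]
    grad = (x_t @ w - y_t) * x_t       # unbiased gradient of 0.5*(x_t.w - y_t)^2
    w -= 0.1 / np.sqrt(t + 1) * grad   # decaying step size, here ~ 1/sqrt(t)
    w_avg += (w - w_avg) / (t + 1)     # online running mean of all iterates

print(np.linalg.norm(w - w_true), np.linalg.norm(w_avg - w_true))
```

The averaged iterate `w_avg` is typically a smoother estimate than the last iterate `w`, which is the behavior the paper's non-asymptotic bounds quantify.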
- Genre:
- Research Report
- Experimental Study (0.35)
- New Finding (0.36)
- Industry:
- Education (0.46)