Learning with Incremental Iterative Regularization
Lorenzo Rosasco, Silvia Villa
Neural Information Processing Systems
Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter, and we prove strong universal consistency, i.e. almost sure convergence of the risk, as well as sharp finite sample bounds for the iterates. Our results are a step towards understanding the effect of multiple epochs in stochastic gradient techniques in machine learning, and rely on integrating statistical and optimization results.
Dec-31-2015
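The scheme described in the abstract, an incremental (cyclic) gradient pass over the data for least squares, with the epoch count acting as the regularization parameter, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, step size, and loop structure are assumptions.

```python
import numpy as np

def incremental_gradient_ls(X, y, n_epochs, step):
    """Cyclic incremental gradient for least squares.

    One "epoch" is a full pass over the n samples, updating the weight
    vector after each sample.  Following the abstract, the step size is
    fixed a priori and n_epochs plays the role of the regularization
    parameter (early stopping).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        for i in range(n):
            # gradient of the single-sample loss (x_i . w - y_i)^2 / 2
            residual = X[i] @ w - y[i]
            w -= step * residual * X[i]
    return w
```

On noisy data, stopping after few epochs underfits while running many epochs overfits, which is why the number of passes behaves like a regularization parameter.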