The Marginal Value of Adaptive Gradient Methods in Machine Learning
Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, Benjamin Recht
Neural Information Processing Systems
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models.
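To make the distinction concrete, here is a minimal Python sketch (not the paper's construction) contrasting plain gradient descent with an AdaGrad-style update, whose per-coordinate metric is accumulated from the history of squared gradients. The toy overparameterized logistic-regression data, step sizes, and iteration counts below are illustrative assumptions, chosen only to show that the two methods can converge to noticeably different solutions.

import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                       # fewer examples than dimensions (overparameterized)
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.1)           # linearly separable labels for this toy problem

def loss(w):
    # Mean logistic loss, computed stably via log(1 + exp(-margin))
    return np.logaddexp(0.0, -y * (X @ w)).mean()

def grad(w):
    margins = y * (X @ w)
    coeff = np.exp(-np.logaddexp(0.0, margins))   # = 1 / (1 + exp(margin)), stable
    return -(X.T @ (y * coeff)) / n

def run_gd(steps=5000, lr=0.5):
    # Plain (full-batch) gradient descent
    w = np.zeros(d)
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def run_adagrad(steps=5000, lr=0.1, eps=1e-8):
    # AdaGrad-style update: rescale each coordinate by the root of
    # the accumulated squared gradients seen so far
    w = np.zeros(d)
    h = np.zeros(d)
    for _ in range(steps):
        g = grad(w)
        h += g * g
        w -= lr * g / (np.sqrt(h) + eps)
    return w

w_gd, w_ada = run_gd(), run_adagrad()
print("train loss  GD: %.4f   AdaGrad: %.4f" % (loss(w_gd), loss(w_ada)))
cos = w_gd @ w_ada / (np.linalg.norm(w_gd) * np.linalg.norm(w_ada))
print("cosine similarity between the two solutions: %.3f" % cos)

Both runs drive the training loss down, but the weight vectors they reach need not point in the same direction, which is the qualitative gap the paper analyzes in terms of generalization.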