Surge Phenomenon in Optimal Learning Rate and Batch Size Scaling
Neural Information Processing Systems
In current deep learning tasks, Adam-style optimizers (such as Adam, Adagrad, RMSprop, Adafactor, and Lion) have been widely used as alternatives to SGD-style optimizers. These optimizers typically update model parameters using the sign of gradients, which yields more stable convergence curves. The learning rate and the batch size are the most critical hyperparameters for these optimizers and require careful tuning to enable effective convergence. Previous research has shown that, for SGD-style optimizers, the optimal learning rate increases linearly (or follows a similar rule) with batch size. However, this conclusion does not hold for Adam-style optimizers.
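As a rough illustration of the ideas in the abstract, the sketch below contrasts an SGD-style step with a sign-based (Lion-style) step and applies the linear learning-rate scaling rule that holds for SGD-style optimizers. This is a minimal sketch, not the paper's method; the function names and constants are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's method): contrast an
# SGD-style update with a sign-based update, and apply the linear
# learning-rate scaling rule that works for SGD-style optimizers but,
# per the abstract, does not carry over to Adam-style optimizers.
import numpy as np

def sgd_step(params, grad, lr):
    # SGD-style update: step size is proportional to the gradient magnitude.
    return params - lr * grad

def sign_step(params, grad, lr):
    # Sign-based update (used by Lion, approximated by Adam): only the sign
    # of each gradient coordinate matters, giving more uniform step sizes.
    return params - lr * np.sign(grad)

def linear_lr_scaling(base_lr, base_batch, new_batch):
    # Linear scaling rule for SGD: scale the learning rate by the same
    # factor as the batch size.
    return base_lr * new_batch / base_batch

rng = np.random.default_rng(0)
params, grad = rng.normal(size=4), rng.normal(size=4)
print("SGD step:        ", sgd_step(params, grad, lr=0.1))
print("Sign-based step: ", sign_step(params, grad, lr=0.1))
print("LR scaled from batch 256 to 1024:", linear_lr_scaling(0.1, 256, 1024))
```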