
Collaborating Authors: adam



Adaptive Methods for Nonconvex Optimization

Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, Sanjiv Kumar

Neural Information Processing Systems

The first prominent algorithm in this line of research is ADAGRAD [7, 22], which uses a per-dimension learning rate based on squared past gradients. ADAGRAD achieved significant performance gains in comparison to SGD when the gradients are sparse.
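
To make the per-dimension rule concrete, here is a minimal NumPy sketch of one ADAGRAD step; the function and variable names are ours for illustration, not taken from [7, 22]:

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.01, eps=1e-8):
    """One ADAGRAD update. `accum` is the running sum of squared
    gradients; dividing by its square root gives each coordinate
    its own effective learning rate, which shrinks fastest for
    frequently updated (dense) coordinates."""
    accum = accum + grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum
```

Because `accum` grows slowly for rarely active coordinates, their effective learning rate stays large, which is exactly why the method shines when gradients are sparse.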



On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions

Neural Information Processing Systems

We also provide a probabilistic convergence result for Adam under a generalized smoothness condition, which allows unbounded smoothness parameters and has been shown empirically to capture the smoothness of many practical objective functions more accurately.
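
For reference, the generalized smoothness condition commonly used in this line of work is the $(L_0, L_1)$-smoothness assumption (the paper's exact formulation may differ):

```latex
\[
\|\nabla f(x) - \nabla f(y)\| \;\le\; \big(L_0 + L_1\,\|\nabla f(x)\|\big)\,\|x - y\|,
\]
```

so for $L_1 > 0$ the local smoothness constant may grow with the gradient norm, which is what "unbounded smoothness parameters" refers to.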


Adam with Bandit Sampling for Deep Learning

Neural Information Processing Systems

Adam is a widely used optimization method for training deep learning models. It computes individual adaptive learning rates for different parameters. In this paper, we propose a generalization of Adam, called Adambs, that also adapts to different training examples based on their importance to the model's convergence. To achieve this, we maintain a distribution over all examples, selecting a mini-batch in each iteration by sampling according to this distribution, which we update using a multi-armed bandit algorithm. This ensures that examples that are more beneficial to model training are sampled with higher probability.
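
To illustrate the idea, here is an EXP3-style sketch of a bandit-maintained sampling distribution over training examples; the reward definition and update rule are our illustration of the general mechanism, not necessarily the exact Adambs algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

class BanditSampler:
    """Multi-armed-bandit distribution over training examples,
    updated with importance-weighted exponential weights."""

    def __init__(self, n_examples, eta=0.01):
        self.eta = eta
        self.weights = np.ones(n_examples)

    def probs(self):
        return self.weights / self.weights.sum()

    def sample(self, batch_size):
        # Draw a mini-batch according to the current distribution.
        p = self.probs()
        idx = rng.choice(len(self.weights), size=batch_size,
                         replace=False, p=p)
        return idx, p[idx]

    def update(self, idx, rewards, p):
        # Examples whose observed reward (e.g. a per-example
        # gradient-norm contribution) is high get sampled with
        # higher probability in later iterations.
        self.weights[idx] *= np.exp(self.eta * rewards / p)
```

Each training iteration would draw a mini-batch with `sample`, run the Adam update on it, and feed a per-example reward back through `update`.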


The coldest body temperatures humans have survived

Popular Science

In some remarkable cases, people have survived after their core temperature has plummeted into the 50s Fahrenheit. The human body needs to maintain a stable internal body temperature or else many vital systems fall apart. Whether you prefer sweltering summers or frigid winters, significant temperature changes mean only one thing to your body: bad news. Humans are homeotherms, meaning that our core body temperature stays roughly constant.



Effective continuous equations for adaptive SGD: a stochastic analysis view

Luca Callisti, Marco Romito, Francesco Triggiano

arXiv.org Machine Learning

We present a theoretical analysis of some popular adaptive Stochastic Gradient Descent (SGD) methods in the small learning rate regime. Using the stochastic modified equations framework introduced by Li et al., we derive effective continuous stochastic dynamics for these methods. Our key contribution is that sampling-induced noise in SGD manifests in the limit as independent Brownian motions driving the parameter and gradient second momentum evolutions. Furthermore, extending the approach of Malladi et al., we investigate scaling rules between the learning rate and key hyperparameters in adaptive methods, characterising all non-trivial limiting dynamics.
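
As a point of reference, the first-order stochastic modified equation that Li et al. derive for plain SGD with learning rate $\eta$ is (the adaptive methods studied here add a coupled equation for the gradient second-momentum state):

```latex
\[
\mathrm{d}\Theta_t \;=\; -\nabla f(\Theta_t)\,\mathrm{d}t
  \;+\; \sqrt{\eta}\;\Sigma(\Theta_t)^{1/2}\,\mathrm{d}W_t,
\]
```

where $\Sigma$ is the covariance of the minibatch gradient noise and $W_t$ a standard Brownian motion; this is how the sampling-induced noise enters the limiting dynamics as a Brownian driver.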



Appendix A: Versatility of the neuron model

Neural Information Processing Systems

In our neuron model, depending on the decay coefficients … The SRM-based back-propagation can be summarized using the relationship between the potentials as follows. Some of the hyper-parameters were not mentioned in the paper; Table A1 lists the hyper-parameters used for loss landscape estimation (Section 3.4) and random spike-train matching.
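
For context, a generic Spike Response Model (SRM) membrane potential, of which the paper's neuron model with its decay coefficients is a variant, takes the form

```latex
\[
u_i(t) \;=\; \sum_{f} \eta\big(t - t_i^{f}\big)
  \;+\; \sum_{j} w_{ij} \sum_{f'} \varepsilon\big(t - t_j^{f'}\big),
\]
```

where $\eta$ is the refractory kernel applied at the neuron's own spike times $t_i^{f}$, $\varepsilon$ the post-synaptic response kernel for input spike times $t_j^{f'}$, and $w_{ij}$ the synaptic weights; the specific decay coefficients and kernels are the paper's own choices and are not reproduced here.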