Why are Adaptive Methods Good for Attention Models?
While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Clipped SGD/Adam have been observed to outperform SGD across important tasks, such as attention models. The settings under which SGD performs poorly in comparison to adaptive methods are not well understood yet. In this paper, we provide empirical and theoretical evidence that a heavy-tailed distribution of the noise in stochastic gradients is one cause of SGD's poor performance. We provide the first tight upper and lower convergence bounds for adaptive gradient methods under heavy-tailed noise. Further, we demonstrate how gradient clipping plays a key role in addressing heavy-tailed gradient noise. Subsequently, we show how clipping can be applied in practice by developing an adaptive coordinate-wise clipping algorithm (ACClip) and demonstrate its superior performance on BERT pretraining and finetuning tasks.
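The abstract does not spell out the ACClip update, but a minimal sketch of generic coordinate-wise gradient clipping (the function name and interface here are hypothetical, not the paper's exact algorithm) could look like:

```python
import numpy as np

def coordinate_wise_clip(grad, tau):
    """Clip each coordinate of the gradient to magnitude at most tau.

    tau may be a scalar or a per-coordinate array of thresholds; in an
    adaptive scheme such as ACClip the thresholds would be updated online,
    which is omitted in this sketch.
    """
    return np.sign(grad) * np.minimum(np.abs(grad), tau)

# A single heavy-tailed coordinate is limited without shrinking the others,
# unlike global norm clipping, which rescales every coordinate uniformly.
g = np.array([0.5, -3.0, 10.0])
print(coordinate_wise_clip(g, 1.0))  # → [ 0.5 -1.   1. ]
```

Coordinate-wise clipping is one plausible reason adaptive methods cope better with heavy-tailed noise: an outlier in one coordinate does not suppress the signal in the rest.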
Review for NeurIPS paper: Why are Adaptive Methods Good for Attention Models?
Summary and Contributions: The paper studies the behavior of SGD, Adam, and SGD with clipping on stochastic optimization problems with heavy-tailed stochastic gradients. First, the authors empirically establish that Adam outperforms SGD on problems with heavy-tailed stochastic gradients. Next, they derive convergence guarantees for clipped SGD in two settings: smooth non-convex problems, under the assumption of a uniformly bounded central moment of order \alpha \in (1,2] of the stochastic gradient, and non-smooth strongly convex problems (the authors state that f should be L-smooth in the theorem, but do not use this in the proof), under the assumption of a uniformly bounded moment of order \alpha \in (1,2]. Interestingly, in these settings SGD can diverge, which fits the empirical evidence that methods with clipping (or its adaptive variants) work better than SGD in the presence of heavy-tailed noise. Furthermore, the paper establishes lower bounds for these settings, implying the optimality of clipped SGD.
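The clipped SGD method analyzed in the review can be sketched in a few lines; this is a generic norm-clipped update under assumed names and hyperparameters, not the paper's exact formulation:

```python
import numpy as np

def clipped_sgd_step(x, grad, lr, clip_norm):
    """One clipped-SGD step.

    The stochastic gradient is rescaled so that its Euclidean norm is at
    most clip_norm before the usual gradient step. Under heavy-tailed
    noise this bounds the effect of any single extreme gradient sample,
    which is what the convergence analysis exploits.
    """
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    return x - lr * grad

# Example: a gradient of norm 5 is rescaled to norm 1 before the step.
x = clipped_sgd_step(np.zeros(2), np.array([3.0, 4.0]), lr=0.1, clip_norm=1.0)
print(x)  # → [-0.06 -0.08]
```

Plain SGD is the special case clip_norm = ∞; the review's point is that without the clipping step, SGD can diverge when the noise has only a bounded \alpha-th moment with \alpha < 2.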
Review for NeurIPS paper: Why are Adaptive Methods Good for Attention Models?
There was a reasonable amount of discussion about this paper. The author feedback clarified a variety of issues, which led some reviewers to increase their scores, while other parts of the discussion led other reviewers to decrease theirs. Although there was one holdout, the majority of the reviewers leaned towards rejection of the paper. However, I believe this is one of the rare cases where the AC should recommend that the PC accept the paper against the recommendations of the reviewers. The main reason I am recommending acceptance is the broader context and potential impact of the paper.