Adam on Local Time: Addressing Nonstationarity in RL with Relative Adam Timesteps
In reinforcement learning (RL), it is common to apply techniques used broadly in machine learning, such as neural network function approximators and momentum-based optimizers [1, 2]. However, such tools were largely developed for supervised learning rather than nonstationary RL, leading practitioners to adopt target networks [3], clipped policy updates [4], and other RL-specific implementation tricks [5, 6] to combat this mismatch, rather than directly adapting this toolchain for use in RL. In this paper, we take a different approach and instead address the effect of nonstationarity by adapting the widely used Adam optimizer [7]. We first analyze the impact of nonstationary gradient magnitude, such as that caused by a change in the target network, on Adam's update size, demonstrating that such a change can lead to large updates and hence suboptimal performance. To address this, we introduce Adam with Relative Timesteps, or Adam-Rel. Rather than using the global timestep in the Adam update, Adam-Rel uses the local timestep within an epoch, essentially resetting Adam's timestep to 0 after target changes. We demonstrate that this avoids large updates and reduces to learning rate annealing in the absence of such increases in gradient magnitude. Evaluating Adam-Rel in both on-policy and off-policy RL, we demonstrate improved performance in both Atari and Craftax. We then show that increases in gradient norm occur in RL in practice, and examine the differences between our theoretical model and the observed data.
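The idea described in the abstract can be summarised in a few lines. The sketch below is illustrative rather than the authors' implementation: following the abstract, it resets only Adam's bias-correction timestep at a target change while carrying the first- and second-moment estimates over; the class name, hyperparameter values, and the reset_timestep hook are hypothetical.

```python
# Minimal sketch of the Adam-Rel idea: standard Adam, except the bias-correction
# timestep t counts steps since the last target change rather than since the
# start of training. Moment estimates m and v are NOT reset (assumption based
# on the abstract, which mentions resetting only the timestep).
import numpy as np

class AdamRel:
    def __init__(self, shape, lr=3e-4, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(shape)   # first-moment estimate (kept across resets)
        self.v = np.zeros(shape)   # second-moment estimate (kept across resets)
        self.t = 0                 # local (relative) timestep

    def reset_timestep(self):
        # Call when the optimisation target changes, e.g. at the start of a new
        # epoch (on-policy) or after a target-network update (off-policy).
        self.t = 0

    def step(self, params, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)   # bias correction uses the
        v_hat = self.v / (1 - self.b2 ** self.t)   # relative timestep
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```

When the gradient magnitude does not jump at a reset, restarting t only changes the bias-correction factor, which acts like a short learning-rate schedule; this matches the abstract's observation that Adam-Rel reduces to learning rate annealing in that case.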
Neural Information Processing Systems
Mar-27-2025, 14:52:59 GMT
- Country:
  - Europe > Austria > Vienna (0.14)
  - Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Genre:
  - Research Report > Experimental Study (1.00)
  - Research Report > New Finding (0.68)
- Industry:
  - Information Technology (0.93)
- Technology: