
Collaborating Authors

van Hasselt






On the Estimation Bias in Double Q-Learning

Neural Information Processing Systems

One of the phenomena of interest is that Q-learning (Watkins, 1989) is known to suffer from overestimation issues, since it takes a maximum operator over a set of estimated action-values.
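The overestimation described above can be reproduced in a few lines: a minimal simulation (not from the paper) where every true action-value is zero, so any estimate is pure noise. Taking the maximum over the noisy estimates is biased upward, while a double estimator in the spirit of double Q-learning (select the argmax with one independent set of estimates, evaluate it with another) removes that bias. All variable names here are illustrative.

```python
import numpy as np

# True action-values are all zero; each estimate carries independent
# unit-Gaussian noise. Two independent estimator sets, A and B.
rng = np.random.default_rng(0)
n_actions, n_trials = 10, 100_000

noise_a = rng.normal(0.0, 1.0, size=(n_trials, n_actions))  # estimator A
noise_b = rng.normal(0.0, 1.0, size=(n_trials, n_actions))  # estimator B

# Single estimator: max over noisy estimates -> positive bias.
single = noise_a.max(axis=1).mean()

# Double estimator: select the action with A, evaluate it with B.
best_a = noise_a.argmax(axis=1)
double = noise_b[np.arange(n_trials), best_a].mean()

print(f"single-estimator bias: {single:.3f}")  # clearly positive
print(f"double-estimator bias: {double:.3f}")  # close to zero
```

Because the selection (via A) and the evaluation (via B) use independent noise, the double estimate is unbiased here, at the cost of possible underestimation in settings where the true values differ.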


Learning values across many orders of magnitude

Hado P. van Hasselt, Arthur Guez, Matteo Hessel, Volodymyr Mnih, David Silver

Neural Information Processing Systems

Most learning algorithms are not invariant to the scale of the signal that is being approximated. We propose to adaptively normalize the targets used in the learning updates. This is important in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior.
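One way to make the idea of adaptively normalized targets concrete is a running-statistics normalizer. This is a simplified sketch under my own assumptions, not the paper's exact method (which also rescales the network's output layer to preserve its predictions when the statistics change); the class and parameter names are hypothetical.

```python
import numpy as np

class TargetNormalizer:
    """Tracks an exponential moving mean and second moment of the
    regression targets, and maps targets to a roughly unit scale."""

    def __init__(self, beta=0.01, eps=1e-8):
        self.beta = beta          # step size for the running statistics
        self.eps = eps            # floor to keep the std strictly positive
        self.mean = 0.0
        self.sq_mean = 1.0

    def update(self, target):
        # Move the running statistics toward the new target, then
        # return the target expressed on the normalized scale.
        self.beta_step = self.beta
        self.mean += self.beta * (target - self.mean)
        self.sq_mean += self.beta * (target ** 2 - self.sq_mean)
        return self.normalize(target)

    @property
    def std(self):
        return np.sqrt(max(self.sq_mean - self.mean ** 2, self.eps))

    def normalize(self, target):
        return (target - self.mean) / self.std

    def denormalize(self, normalized):
        return normalized * self.std + self.mean

# Targets spanning several orders of magnitude are mapped near unit scale.
norm = TargetNormalizer()
for t in [1.0, 10.0, 100.0, 1000.0] * 50:
    norm.update(t)
print(norm.normalize(1000.0), norm.denormalize(norm.normalize(1000.0)))
```

A learner would regress on `normalize(target)` and read out predictions through `denormalize`, so the update magnitudes stay comparable even as the scale of the value function drifts with the behavior policy.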




Forethought and Hindsight in Credit Assignment

Neural Information Processing Systems

Credit assignment, i.e., determining how to correctly associate delayed rewards with states or state-action pairs, is a crucial problem for reinforcement learning (RL) agents (Sutton and Barto, 2018).