



Appendix A: Implementation Details

Neural Information Processing Systems

We are also committed to releasing the code. Implementation details for Stage 2: our implementation strictly follows the previous work. In this section, we briefly introduce our tasks. One task requires the robot hand to open the door on the table; another requires the robot hand to orient the pen to the target orientation; another requires the robot hand to place the object on the table into the mug. We present the success rates of our six task categories in Table 1.



Resetting the Optimizer in Deep RL: An Empirical Study

Neural Information Processing Systems

We focus on the task of approximating the optimal value function in deep reinforcement learning. This iterative process comprises solving a sequence of optimization problems in which the loss function changes per iteration. The common approach to solving this sequence of problems is to employ modern variants of the stochastic gradient descent algorithm, such as Adam. These optimizers maintain their own internal parameters, such as estimates of the first-order and second-order moments of the gradient, and update them over time. Therefore, information obtained in previous iterations is used to solve the optimization problem in the current iteration. We demonstrate that this can contaminate the moment estimates, because the optimization landscape can change arbitrarily from one iteration to the next. To hedge against this negative effect, a simple idea is to reset the internal parameters of the optimizer when starting a new iteration. We empirically investigate this resetting idea by employing various optimizers in conjunction with the Rainbow algorithm. We demonstrate that this simple modification significantly improves the performance of deep RL on the Atari benchmark.
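The resetting idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's code: a bare-bones scalar Adam whose internal moment estimates and step counter can be cleared via a hypothetical `reset()` method whenever a new optimization problem in the sequence begins.

```python
class TinyAdam:
    """Scalar Adam optimizer with resettable internal state (illustrative)."""

    def __init__(self, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.reset()

    def reset(self):
        # Discard the first/second moment estimates and the step counter,
        # so stale gradient statistics from the previous loss cannot
        # contaminate the new iteration.
        self.m = 0.0
        self.v = 0.0
        self.t = 0

    def step(self, param, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad * grad
        m_hat = self.m / (1 - self.beta1 ** self.t)   # bias correction
        v_hat = self.v / (1 - self.beta2 ** self.t)
        return param - self.lr * m_hat / (v_hat ** 0.5 + self.eps)


def solve(opt, target, param, n_steps=200):
    # One "iteration" of the outer process: minimize (param - target)^2,
    # whose gradient is 2 * (param - target).
    for _ in range(n_steps):
        param = opt.step(param, 2.0 * (param - target))
    return param


opt = TinyAdam()
x = 0.0
for target in [5.0, -3.0, 1.0]:   # the loss function changes per iteration
    opt.reset()                   # start each iteration with fresh moments
    x = solve(opt, target, x)
```

In a deep RL setting the same effect can be achieved by constructing a fresh optimizer instance (or reinitializing its state) each time the target of the regression problem changes.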





Appendix A: Control algorithm

The action-value function can be decomposed into two components as:

Q^{(PT)}(s, a) = Q^{(P)}(s, a) + Q^{(T)}_w(s, a)
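The decomposition above can be illustrated with a small sketch. The names below (`DecomposedQ`, `consolidate`, the 0.5 consolidation step) are assumptions for illustration, not the authors' implementation: the behavior value Q^(PT) is the sum of a slowly changing permanent component Q^(P) and a fast, resettable transient component Q^(T).

```python
from collections import defaultdict


class DecomposedQ:
    """Tabular action-value function split into permanent and transient parts."""

    def __init__(self):
        self.q_perm = defaultdict(float)   # Q^(P): retained across tasks
        self.q_trans = defaultdict(float)  # Q^(T): reset when the task changes

    def q(self, s, a):
        # Q^(PT)(s, a) = Q^(P)(s, a) + Q^(T)(s, a)
        return self.q_perm[(s, a)] + self.q_trans[(s, a)]

    def consolidate(self, step=0.5):
        # Fold a fraction of the transient estimate into the permanent
        # component, then reset the transient part (assumed schedule).
        for key, value in self.q_trans.items():
            self.q_perm[key] += step * value
        self.q_trans.clear()
```

A control agent would act greedily with respect to `q(s, a)` while learning updates target the two components at different timescales.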

Neural Information Processing Systems

We use induction to prove this statement. The penultimate step follows from the induction hypothesis, completing the proof. Then, the fixed point of Eq. (5) is the value function in M. We focus on the permanent value function in the next two theorems. The permanent value function is updated using Eq.