Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update
Su Young Lee, Sungik Choi, Sae-Young Chung
Neural Information Processing Systems
We propose Episodic Backward Update (EBU) – a novel deep reinforcement learning algorithm with direct value propagation. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate directly through all transitions of a sampled episode. We theoretically prove the convergence of the EBU method and experimentally demonstrate its performance in both deterministic and stochastic environments. In particular, on 49 games of the Atari 2600 domain, EBU achieves the same mean and median human-normalized performance as DQN using only 5% and 10% of the samples, respectively.
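The backward propagation the abstract describes can be sketched as follows. This is a minimal, illustrative NumPy sketch of computing backward-propagated targets for one sampled episode, assuming a target-network Q-table of shape (num_actions, T), per-step rewards and actions, a discount factor gamma, and a diffusion coefficient beta that blends propagated targets with bootstrapped values; all names here are ours for illustration, not the authors' code.

```python
import numpy as np

def ebu_targets(rewards, actions, q_tilde, gamma=0.99, beta=0.5):
    """Backward target computation for one sampled episode (sketch).

    rewards: (T,) rewards observed along the episode.
    actions: (T,) actions taken at each step.
    q_tilde: (num_actions, T) target-network Q-values for the episode's
             states; copied so the backward recursion can overwrite entries.
    beta:    diffusion coefficient mixing the propagated target with the
             original bootstrapped value (assumption: 0 < beta <= 1).
    """
    T = len(rewards)
    q_tilde = q_tilde.copy()
    y = np.zeros(T)
    y[T - 1] = rewards[T - 1]          # terminal step: target is the final reward
    for k in range(T - 2, -1, -1):     # sweep the episode backward
        # Blend the freshly propagated target into the taken action's entry...
        q_tilde[actions[k + 1], k + 1] = (
            beta * y[k + 1] + (1.0 - beta) * q_tilde[actions[k + 1], k + 1]
        )
        # ...so a sparse, delayed reward flows directly into earlier targets.
        y[k] = rewards[k] + gamma * q_tilde[:, k + 1].max()
    return y  # regression targets for the online network on this episode
```

The returned targets would then serve as regression labels when training the online Q-network on the episode's transitions, which is how a single delayed reward can reach every state in the episode after one sweep.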