Improving On-policy Learning with Statistical Reward Accumulation
Deng, Yubin; Yu, Ke; Lin, Dahua; Tang, Xiaoou; Loy, Chen Change
–arXiv.org Artificial Intelligence
Deep reinforcement learning has achieved significant breakthroughs in recent years. Most deep-RL methods obtain good results by maximizing the reward signal provided by the environment, typically in the form of discounted cumulative returns. Such reward signals represent the immediate feedback for a particular action performed by an agent. However, tasks with sparse reward signals remain challenging for on-policy methods. In this paper, we introduce an effective characterization of past reward statistics (which can be seen as long-term feedback signals) to supplement this immediate reward feedback. In particular, value functions are learned with multi-critic supervision, enabling complex value functions to be approximated more easily in on-policy learning, even when reward signals are sparse. We also introduce a novel exploration mechanism called "hot-wiring" that can give a boost to seemingly trapped agents. We demonstrate the effectiveness of our advantage actor multi-critic (A2MC) method on discrete domains in Atari games as well as continuous domains in the MuJoCo environments. A video demo is provided at https://youtu.be/zBmpf3Yz8tc.
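The sketch below illustrates the general idea described in the abstract: alongside the standard discounted return, a second value head is supervised with an auxiliary target built from past reward statistics, which can provide a denser learning signal when immediate rewards are sparse. This is a minimal illustration only, not the authors' exact A2MC formulation; the function names, the sliding-window mean used as the "statistical" target, and the `aux_weight` coefficient are all hypothetical choices.

```python
# Minimal sketch of multi-critic value supervision (hypothetical, not the
# paper's exact method): one target is the usual discounted return, the
# other is a simple characterization of past reward statistics.
import numpy as np


def discounted_returns(rewards, gamma=0.99):
    """Standard discounted cumulative return G_t = r_t + gamma * G_{t+1}."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns


def statistical_returns(rewards, window=32):
    """Hypothetical long-term feedback signal: a running mean over a window
    of past rewards, used as an auxiliary supervision target."""
    out = np.zeros(len(rewards))
    for t in range(len(rewards)):
        lo = max(0, t - window + 1)
        out[t] = np.mean(rewards[lo:t + 1])
    return out


def multi_critic_loss(v_short, v_long, rewards, gamma=0.99, aux_weight=0.5):
    """Combine regression losses for two critic heads: `v_short` predicts the
    discounted return, `v_long` predicts the statistical (long-term) target."""
    target_short = discounted_returns(rewards, gamma)
    target_long = statistical_returns(rewards)
    loss_short = np.mean((v_short - target_short) ** 2)
    loss_long = np.mean((v_long - target_long) ** 2)
    return loss_short + aux_weight * loss_long


# Toy usage: a short rollout with sparse rewards and placeholder critic outputs.
rewards = np.array([0., 0., 0., 1., 0., 0., 0., 0., 1., 0.])
v_short = np.zeros_like(rewards)
v_long = np.zeros_like(rewards)
print(multi_critic_loss(v_short, v_long, rewards))
```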
Sep-7-2018