How Should an Agent Practice?
Janarthanan Rajendran, Richard Lewis, Vivek Veeriah, Honglak Lee, Satinder Singh
We present a method for learning intrinsic reward functions to drive the learning of an agent during periods of practice in which extrinsic task rewards are not available. During practice, the environment may differ from the one available for training and evaluation with extrinsic rewards. We refer to this setup of alternating periods of practice and objective evaluation as practice-match, drawing an analogy to regimes of skill acquisition common for humans in sports and games. The agent must use its time in the practice environment effectively so that performance improves during matches. In the proposed method, the intrinsic practice reward is learned through a meta-gradient approach that adapts the practice-reward parameters to reduce the extrinsic match-reward loss computed from matches. We illustrate the method on a simple grid world, and evaluate it in two games in which the practice environment differs from the match environment: Pong with practice against a wall without an opponent, and PacMan with practice in a maze without ghosts. The results show gains from learning during practice in addition to matches, compared with learning during matches only.

Introduction

There are many applications of reinforcement learning (RL) in which the natural formulation of the reward function gives rise to difficult computational challenges, or in which the reward itself is unavailable for extended periods of time or is difficult to specify. These include settings with very sparse or delayed reward, multiple tasks or goals, reward uncertainty, and learning in the absence of reward or in advance of unknown future reward. A range of approaches address these challenges through reward design, providing intrinsic rewards to the agent that augment or replace the objective or extrinsic reward. The aim is to provide useful and proximal learning signals that drive behavior and learning in a way that improves performance on the main objective of interest (Ng, Harada, and Russell 1999; Barto, Singh, and Chentanez 2004; Singh et al. 2010). These intrinsic rewards are often hand-engineered, and based on either task-specific reward features developed from domain analysis, or task-general reward features, sometimes inspired by intrinsic motivation.
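To make the meta-gradient idea above concrete, the following is a minimal illustrative sketch in JAX, not the authors' implementation. It assumes a toy one-step setting: policy parameters theta are updated during practice with an intrinsic reward parameterized by eta, and eta is then adjusted by differentiating the extrinsic match loss through that practice update. All function and variable names (intrinsic_reward, practice_loss, match_loss, inner_update, meta_step) are hypothetical placeholders for the corresponding pieces of the method.

```python
# Minimal meta-gradient sketch (illustrative, not the paper's code).
import jax
import jax.numpy as jnp

ALPHA = 0.1   # inner (practice) learning rate for the policy parameters
BETA = 0.01   # outer (meta) learning rate for the intrinsic-reward parameters

def intrinsic_reward(eta, features):
    # Hypothetical linear intrinsic (practice) reward over state features.
    return jnp.dot(eta, features)

def practice_loss(theta, eta, practice_batch):
    # Stand-in for a policy-gradient surrogate during practice:
    # negative intrinsic-reward-weighted score of the policy.
    feats, _ = practice_batch
    scores = feats @ theta
    rewards = jax.vmap(lambda f: intrinsic_reward(eta, f))(feats)
    return -jnp.mean(scores * rewards)

def match_loss(theta, match_batch):
    # Extrinsic loss measured in matches (here, squared error to returns).
    feats, returns = match_batch
    return jnp.mean((feats @ theta - returns) ** 2)

def inner_update(theta, eta, practice_batch):
    # One practice step: gradient descent on the intrinsic-reward-driven loss.
    g = jax.grad(practice_loss)(theta, eta, practice_batch)
    return theta - ALPHA * g

def meta_loss(eta, theta, practice_batch, match_batch):
    # Meta-objective: extrinsic match loss of the policy *after* practicing with eta.
    theta_new = inner_update(theta, eta, practice_batch)
    return match_loss(theta_new, match_batch)

def meta_step(theta, eta, practice_batch, match_batch):
    # Differentiate through the inner practice update to adapt the practice reward.
    meta_grad = jax.grad(meta_loss)(eta, theta, practice_batch, match_batch)
    eta_new = eta - BETA * meta_grad
    theta_new = inner_update(theta, eta_new, practice_batch)
    return theta_new, eta_new

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    k1, k2, k3 = jax.random.split(key, 3)
    theta = jnp.zeros(4)
    eta = jnp.zeros(4)
    practice_feats = jax.random.normal(k1, (32, 4))
    match_feats = jax.random.normal(k2, (32, 4))
    match_returns = jax.random.normal(k3, (32,))
    for _ in range(10):
        theta, eta = meta_step(theta, eta,
                               (practice_feats, jnp.zeros(32)),
                               (match_feats, match_returns))
    print("eta after meta-training:", eta)
```

The design point this sketch tries to capture is that the outer gradient flows through the inner practice update, so the practice reward parameters are shaped only by how much the resulting practice improves subsequent match performance.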
arXiv.org Artificial Intelligence
Dec-15-2019