Episodic Curiosity through Reachability

Nikolay Savinov, Anton Raichuk, Raphaël Marinier, Damien Vincent, Marc Pollefeys, Timothy Lillicrap, Sylvain Gelly

arXiv.org Artificial Intelligence 

Rewards are sparse in the real world, and most of today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself, thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such a bonus is summed with the real task reward, making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is based on how many environment steps it takes to reach the current observation from those in memory, which incorporates rich information about environment dynamics. This allows us to overcome the known "couch-potato" issue of prior work, where the agent finds a way to instantly gratify itself by exploiting actions which lead to unpredictable consequences. We test our approach in visually rich 3D environments in VizDoom and DMLab. In VizDoom, our agent learns to successfully navigate to a distant goal at least 2 times faster than the state-of-the-art curiosity method ICM. In DMLab, our agent generalizes well to new procedurally generated levels of the game, reaching the goal at least 2 times more frequently than ICM on test mazes with very sparse reward.

Many real-world tasks have sparse rewards. For example, animals searching for food may need to travel many miles without any reward from the environment. Multiple approaches have been proposed to achieve more explorative policies. One way is to give a reward bonus which facilitates exploration by rewarding novel observations. The reward bonus is summed with the original task reward and optimized by standard RL algorithms. Such an approach is motivated by neuroscience studies of animals: an animal has the ability to reward itself for something novel, a mechanism biologically built into its dopamine release system.
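To make the mechanism concrete, below is a minimal Python sketch of a reachability-based episodic novelty bonus as described in the abstract. It is an illustration under stated assumptions, not the paper's exact implementation: the class name EpisodicCuriosity, the embedding function embed, the reachability scorer reachability_net (assumed to be a learned network that estimates whether one observation is reachable from another within a few environment steps), the 90th-percentile aggregation, the memory capacity, and the constants scale, shift and novelty_threshold are all hypothetical or illustrative choices.

import numpy as np

class EpisodicCuriosity:
    """Sketch of an episodic novelty bonus based on reachability.

    The current observation is compared against an episodic memory of past
    embeddings; if the memory suggests it is not reachable within a few steps
    from anything seen before, a positive bonus is returned and added to the
    task reward by the caller.
    """

    def __init__(self, embed, reachability_net, capacity=200,
                 novelty_threshold=0.0, scale=1.0, shift=0.5):
        self.embed = embed                        # observation -> embedding (assumed learned)
        self.reachability_net = reachability_net  # (emb_a, emb_b) -> p(reachable in few steps)
        self.memory = []                          # episodic memory of embeddings
        self.capacity = capacity
        self.novelty_threshold = novelty_threshold  # illustrative value
        self.scale = scale                          # illustrative bonus scale
        self.shift = shift                          # illustrative bonus offset

    def reset(self):
        # Memory is episodic: cleared at the start of every episode.
        self.memory = []

    def bonus(self, observation):
        e = self.embed(observation)
        if not self.memory:
            self.memory.append(e)
            return self.scale * self.shift
        # Score reachability against every stored embedding, then aggregate
        # robustly into a single similarity value (one possible choice).
        scores = np.array([self.reachability_net(m, e) for m in self.memory])
        similarity = np.percentile(scores, 90)
        b = self.scale * (self.shift - similarity)
        # Store only observations judged novel, so memory tracks distinct regions.
        if b > self.novelty_threshold and len(self.memory) < self.capacity:
            self.memory.append(e)
        return b

# Illustrative usage with toy stand-ins for the learned networks.
if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def toy_embed(obs):
        return np.asarray(obs, dtype=np.float32)

    def toy_reach(a, b):
        # Stand-in: treats nearby embeddings as "reachable within a few steps".
        return float(np.exp(-np.linalg.norm(a - b)))

    ec = EpisodicCuriosity(toy_embed, toy_reach)
    ec.reset()
    for t in range(5):
        obs = rng.normal(size=8)
        task_reward = 0.0                       # sparse task reward
        total_reward = task_reward + ec.bonus(obs)  # combined reward for the RL algorithm
        print(f"step {t}: combined reward = {total_reward:.3f}")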
