Neighboring state-based RL Exploration

Jeffery Cheng, Kevin Li, Justin Lin, Pedro Pachuca

arXiv.org Artificial Intelligence 

Reinforcement Learning is a powerful tool for modeling decision-making processes. However, it relies on an exploration-exploitation trade-off that remains an open challenge for many tasks. In this work, we study neighboring state-based, model-free exploration, led by the intuition that, for an early-stage agent, considering actions derived from a bounded region of nearby states may lead to better actions when exploring. We propose two algorithms that choose exploratory actions based on a survey of nearby states, and find that one of our methods, ρ-explore, consistently outperforms the Double DQN baseline in a discrete environment by 49% in terms of Eval Reward Return.

A popular area of recent study in Reinforcement Learning (RL) is that of exploration methods.
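The abstract describes the idea only at a high level, so the exact ρ-explore procedure is not reproduced here. A minimal sketch of one way to read "choosing exploratory actions from a survey of nearby states" is given below, under assumptions not stated in the abstract: a continuous state vector, a Q-function `q_values(s)` returning per-action estimates, a bounded L∞ neighborhood of radius ρ, and an ε-greedy-style exploration trigger.

```python
import numpy as np

def neighbor_survey_action(state, q_values, rho=0.1, n_neighbors=8,
                           epsilon=0.1, rng=None):
    """Hypothetical sketch of neighboring state-based exploration.

    When exploring, instead of taking a uniformly random action, sample
    states from a bounded region of radius `rho` around the current state
    and pick the action that looks best across those neighbors.
    `q_values(s)` is assumed to return an array of Q-estimates, one per action.
    """
    rng = rng or np.random.default_rng()
    if rng.random() > epsilon:
        # Exploit: greedy action at the current state.
        return int(np.argmax(q_values(state)))
    # Explore: survey nearby states inside an L-infinity ball of radius rho.
    state = np.asarray(state, dtype=float)
    neighbors = state + rng.uniform(-rho, rho, size=(n_neighbors,) + state.shape)
    neighbor_q = np.stack([q_values(s) for s in neighbors])  # (n_neighbors, n_actions)
    # Choose the action with the best average value over the surveyed region.
    return int(np.argmax(neighbor_q.mean(axis=0)))
```

This is only an illustrative reading of the survey-of-nearby-states intuition, not the authors' published algorithm; the paper's two proposed methods and the reported 49% improvement over Double DQN refer to their own formulation and experimental setup.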
