AdaMemento: Adaptive Memory-Assisted Policy Optimization for Reinforcement Learning

Renye Yan, Yaozhong Gan, You Wu, Junliang Xing, Ling Liang, Yeshang Zhu, Yimao Cai

arXiv.org Artificial Intelligence 

ABSTRACT

In sparse reward scenarios of reinforcement learning (RL), memory mechanisms offer promising shortcuts to policy optimization by reflecting on past experiences, much as humans do. However, current memory-based RL methods simply store and reuse high-value policies, without deeper refinement and filtering of diverse past experiences, which limits the capability of the memory. In this paper, we propose AdaMemento, an adaptive memory-enhanced RL framework. Instead of merely memorizing positive past experiences, we design a memory-reflection module that exploits both positive and negative experiences by learning to predict known locally optimal policies from real-time states. To effectively gather informative trajectories for the memory, we further introduce a fine-grained intrinsic motivation paradigm in which nuances between similar states can be precisely distinguished to guide exploration. The exploitation of past experiences and the exploration of new policies are then adaptively coordinated by ensemble learning to approach the global optimum. Furthermore, we theoretically prove the superiority of our new intrinsic motivation and ensemble mechanism. Across 59 quantitative and visualization experiments, we confirm that AdaMemento can distinguish subtle states for better exploration and effectively exploit past experiences in memory, achieving significant improvements over previous methods.

In sparse reward environments, however, policy updates become unstable and ineffective due to insufficient feedback (Bellemare et al., 2016; Liang et al., 2018), which significantly increases the difficulty of learning effective long-horizon policies. Memory offers a promising solution to the sparse reward problem, as humans can effectively learn from past experiences to avoid repeating mistakes in similar scenarios (Liu et al., 2021; Bransford & Johnson, 1972; Andrychowicz et al., 2017). Through memory, agents can draw on prior successful experiences to refine their policies in complex environments, reducing reliance on dense reward feedback and improving both learning efficiency and policy stability (Pathak et al., 2017). Existing memory-based RL methods can be roughly categorized into two classes.
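As a concrete illustration of the adaptive coordination described above, the following is a minimal, hypothetical sketch of how an ensemble could weight a memory-based exploitation policy against an exploration policy according to the memory module's confidence in the current state. The function names, the confidence signal, and the confidence-weighted blending rule are illustrative assumptions, not AdaMemento's actual mechanism.

```python
# Hypothetical sketch: confidence-weighted ensemble of a memory-based
# exploitation policy and an exploration policy. Names and the weighting
# rule are illustrative assumptions, not AdaMemento's actual algorithm.
import numpy as np

def ensemble_action(state, memory_policy, exploration_policy, n_actions,
                    temperature=1.0, rng=None):
    """Blend action preferences from memory and exploration policies.

    memory_policy(state)      -> (action_logits, confidence in [0, 1])
    exploration_policy(state) -> action_logits
    """
    rng = rng if rng is not None else np.random.default_rng()
    mem_logits, confidence = memory_policy(state)
    exp_logits = exploration_policy(state)

    # Weight the memory policy more heavily when it is confident that the
    # current state resembles previously memorized high-value experience.
    blended = confidence * mem_logits + (1.0 - confidence) * exp_logits

    # Softmax with temperature (max-subtraction for numerical stability).
    logits = blended / temperature
    logits = logits - logits.max()
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(n_actions, p=probs)

# Example usage with dummy policies over 4 actions (illustrative only).
rng = np.random.default_rng(0)
memory_policy = lambda s: (rng.normal(size=4), 0.8)   # confident memory prediction
exploration_policy = lambda s: rng.normal(size=4)
action = ensemble_action(None, memory_policy, exploration_policy, n_actions=4)
```

In this sketch, the confidence term acts as a gate: states that closely match memorized high-value trajectories follow the memory prediction, while novel states fall back to the exploration policy.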