Regret Minimization Experience Replay in Off-Policy Reinforcement Learning
Neural Information Processing Systems
In reinforcement learning, experience replay stores past samples for further reuse. Prioritized sampling is a promising technique for making better use of these samples. Previous prioritization criteria include TD error, recency, and corrective feedback, and are mostly heuristically designed. In this work, we start from the regret minimization objective and derive an optimal prioritization strategy for the Bellman update that directly maximizes the return of the policy. The theory suggests that data with higher hindsight TD error, better on-policiness, and more accurate Q value should be assigned higher weights during sampling; most previous criteria therefore capture this strategy only partially. We not only provide theoretical justification for previous criteria, but also propose two new methods to compute the prioritization weight, namely ReMERN and ReMERT. ReMERN learns an error network, while ReMERT exploits the temporal ordering of states. Both methods outperform previous prioritized sampling algorithms on challenging RL benchmarks, including MuJoCo, Atari, and Meta-World.
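To make the prioritization idea concrete, below is a minimal sketch of a replay buffer that samples transitions in proportion to a per-sample priority. The multiplicative combination of |TD error|, an on-policiness score, and a Q-accuracy score is a hypothetical placeholder chosen only to reflect the three factors named in the abstract; it is not the ReMERN or ReMERT weighting derived in the paper.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal prioritized replay buffer (illustrative sketch).

    Priorities follow the qualitative recipe from the abstract: transitions
    with larger hindsight TD error, higher on-policiness, and more accurate
    Q values receive larger sampling weight. The combination rule below is
    a hypothetical placeholder, not the paper's ReMERN/ReMERT weighting.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.storage = []      # transitions, e.g. (s, a, r, s_next, done)
        self.priorities = []   # one unnormalized sampling weight per transition
        self.pos = 0           # next slot to overwrite once the buffer is full

    def add(self, transition, td_error, on_policiness, q_accuracy=1.0):
        # Hypothetical combination: each factor increases the weight.
        priority = abs(td_error) * max(on_policiness, 1e-6) * max(q_accuracy, 1e-6)
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
            self.priorities.append(priority)
        else:
            self.storage[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices in proportion to the (normalized) priorities.
        probs = np.asarray(self.priorities, dtype=np.float64)
        probs /= probs.sum()
        idx = np.random.choice(len(self.storage), size=batch_size, p=probs)
        return [self.storage[i] for i in idx], idx
```

Note that sampling non-uniformly biases the Bellman update, which is why classical prioritized experience replay corrects with importance-sampling weights; the paper instead derives its weights directly from the regret minimization objective.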