GCHR: Goal-Conditioned Hindsight Regularization for Sample-Efficient Reinforcement Learning
Lei, Xing, Yang, Wenyan, Ke, Kaiqiang, Yang, Shentao, Zhang, Xuetao, Pajarinen, Joni, Wang, Donglin
arXiv.org Artificial Intelligence
Goal-conditioned reinforcement learning (GCRL) with sparse rewards remains a fundamental challenge in reinforcement learning. While hindsight experience replay (HER) has shown promise by relabeling collected trajectories with achieved goals, we argue that trajectory relabeling alone does not fully exploit the available experiences in off-policy GCRL methods, resulting in limited sample efficiency. In this paper, we propose Hindsight Goal-conditioned Regularization (HGR), a technique that generates action regularization priors based on hindsight goals. When combined with hindsight self-imitation regularization (HSR), our approach enables off-policy RL algorithms to maximize experience utilization. Compared to existing GCRL methods that employ HER and self-imitation techniques, our hindsight regularizations achieve substantially more efficient sample reuse and the best performance, as we demonstrate empirically on a suite of navigation and manipulation tasks.
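The hindsight relabeling that the abstract builds on can be illustrated with a minimal sketch. This is not the paper's method: it is a generic HER-style relabeler under common conventions (sample k future achieved goals per transition; sparse reward of 0 on goal match, -1 otherwise). The field names (`achieved_goal`, `next_achieved_goal`, etc.) are illustrative assumptions, not taken from the paper.

```python
import random

def her_relabel(trajectory, k=4):
    """Relabel each transition with k achieved goals sampled from the
    future of the same trajectory (the HER 'future' strategy), and
    recompute the sparse reward against the relabeled goal.

    `trajectory` is a list of dicts with keys 'obs', 'action',
    'achieved_goal', 'next_achieved_goal' (illustrative names)."""
    relabeled = []
    for t, step in enumerate(trajectory):
        future = trajectory[t:]  # candidate hindsight goals from time t on
        for _ in range(k):
            goal = random.choice(future)["next_achieved_goal"]
            # Sparse reward: 0 if the relabeled goal was achieved, else -1.
            reward = 0.0 if step["next_achieved_goal"] == goal else -1.0
            relabeled.append({**step, "goal": goal, "reward": reward})
    return relabeled
```

Under this convention every trajectory yields k additional goal-reaching transitions per step, which is the experience reuse that the paper argues relabeling alone does not fully exploit.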
Aug-11-2025