Goal Exploration via Adaptive Skill Distribution for Goal-Conditioned Reinforcement Learning
Exploration efficiency poses a significant challenge in goal-conditioned reinforcement learning (GCRL) tasks, particularly those with long horizons and sparse rewards. A primary obstacle to efficient exploration is the agent's inability to leverage structural patterns in the environment. In this study, we introduce a novel framework, GEASD (Goal Exploration via Adaptive Skill Distribution), designed to capture these patterns through an adaptive skill distribution during the learning process. This distribution optimizes the local entropy of achieved goals within a contextual horizon, enhancing goal-spreading behaviors and facilitating deep exploration in states containing familiar structural patterns. Our experiments reveal marked improvements in exploration efficiency under the adaptive skill distribution compared to a uniform skill distribution. Additionally, the learned skill distribution generalizes robustly, achieving substantial exploration progress in unseen tasks that contain similar local structures.
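The core mechanism described in the abstract, sampling skills so as to increase the local entropy of achieved goals within a contextual horizon, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names, the histogram-based entropy estimate, and the softmax weighting over per-skill entropy estimates are all assumptions introduced here for clarity.

```python
import numpy as np

def local_goal_entropy(achieved_goals, bins=10):
    """Rough entropy estimate of achieved goals within a contextual horizon,
    using a simple histogram density estimate (illustrative assumption)."""
    hist, _ = np.histogramdd(np.asarray(achieved_goals), bins=bins)
    probs = hist.flatten() / hist.sum()
    probs = probs[probs > 0]
    return -np.sum(probs * np.log(probs))

def adaptive_skill_distribution(skill_entropies, temperature=1.0):
    """Softmax over per-skill local-entropy estimates: skills whose recent
    rollouts spread achieved goals more widely get higher sampling probability."""
    logits = np.asarray(skill_entropies, dtype=float) / temperature
    logits -= logits.max()  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Hypothetical usage: local-entropy estimates for four skills over a recent horizon
entropies = [local_goal_entropy(np.random.randn(50, 2)) for _ in range(4)]
print(adaptive_skill_distribution(entropies))
```

In this sketch, skills associated with higher local goal entropy are sampled more often, which is one plausible way to realize the goal-spreading behavior the abstract describes.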
arXiv.org Artificial Intelligence
Apr-19-2024