OGBench: Benchmarking Offline Goal-Conditioned RL
Seohong Park, Kevin Frans, Benjamin Eysenbach, Sergey Levine
Offline goal-conditioned reinforcement learning (GCRL) is a major problem in reinforcement learning (RL) because it provides a simple, unsupervised, and domain-agnostic way to acquire diverse behaviors and representations from unlabeled data without rewards. Despite the importance of this setting, we lack a standard benchmark that can systematically evaluate the capabilities of offline GCRL algorithms. In this work, we propose OGBench, a new, high-quality benchmark for algorithms research in offline goal-conditioned RL. OGBench consists of 8 types of environments, 85 datasets, and reference implementations of 6 representative offline GCRL algorithms. We have designed these challenging and realistic environments and datasets to directly probe different capabilities of algorithms, such as stitching, long-horizon reasoning, and the ability to handle high-dimensional inputs and stochasticity. While representative algorithms may rank similarly on prior benchmarks, our experiments reveal stark strengths and weaknesses in these different capabilities, providing a strong foundation for building new algorithms. Project page: https://seohong.me/projects/ogbench
arXiv.org Artificial Intelligence
Feb-13-2025
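For readers who want to try the benchmark, here is a minimal sketch of how an OGBench dataset might be loaded in Python. It assumes the pip-installable ogbench package described on the project page exposes a make_env_and_datasets helper and D4RL-style dataset dictionaries; the dataset name and dictionary keys below are illustrative assumptions, not details stated in the abstract.

# Minimal sketch, assuming the pip-installable `ogbench` package exposes a
# `make_env_and_datasets` helper; dataset name and dict keys are illustrative.
import ogbench

# One of the released navigation datasets (hypothetical example name).
dataset_name = 'antmaze-large-navigate-v0'

# Build the environment and load the train/validation datasets for that task.
env, train_dataset, val_dataset = ogbench.make_env_and_datasets(dataset_name)

# Datasets are assumed to be flat dicts of NumPy arrays (observations,
# actions, terminals, ...) that can be fed to any offline GCRL learner.
print(train_dataset['observations'].shape, train_dataset['actions'].shape)

# Standard Gymnasium-style interaction for goal-conditioned evaluation.
obs, info = env.reset()
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)

The dictionary layout mirrors the common D4RL-style convention for offline RL datasets; treat the exact interface as a sketch rather than the package's documented API.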