Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data
Neural Information Processing Systems
Offline reinforcement learning (RL) can be used to improve future performance by leveraging historical data. There exist many different algorithms for offline RL, and it is well recognized that these algorithms, and their hyperparameter settings, can lead to decision policies with substantially differing performance. This prompts the need for pipelines that allow practitioners to systematically perform algorithm-hyperparameter selection for their setting. Critically, in most real-world settings, this pipeline must rely only on historical data. Inspired by statistical model selection methods for supervised learning, we introduce a task- and method-agnostic pipeline for automatically training, comparing, selecting, and deploying the best policy when the provided dataset is limited in size.
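The core loop such a pipeline implies can be illustrated with a short sketch: train each candidate algorithm-hyperparameter pair on a split of the offline dataset, score the resulting policy on held-out transitions with an off-policy evaluation (OPE) estimator, and retrain the winner on the full dataset before deployment. The sketch below is a minimal illustration under those assumptions, not the paper's actual interface; `select_policy`, `evaluate_policy`, the number of splits, and the 80/20 split ratio are all hypothetical choices.

```python
import numpy as np

def select_policy(dataset, candidates, evaluate_policy, n_splits=5, seed=0):
    """Pick the best (algorithm, hyperparameter) pair using only offline data.

    `candidates` is a list of training functions, each mapping a list of
    transitions to a policy; `evaluate_policy` is an off-policy evaluation
    routine (e.g. fitted Q-evaluation) that scores a policy on held-out
    transitions. All names here are illustrative, not the paper's API.
    """
    rng = np.random.default_rng(seed)
    n = len(dataset)
    scores = np.zeros((len(candidates), n_splits))
    for s in range(n_splits):
        # Repeated random sub-sampling: re-split the limited dataset each
        # round so every candidate is trained and scored on several folds.
        idx = rng.permutation(n)
        train_idx, val_idx = idx[: int(0.8 * n)], idx[int(0.8 * n):]
        train = [dataset[i] for i in train_idx]
        val = [dataset[i] for i in val_idx]
        for c, train_fn in enumerate(candidates):
            policy = train_fn(train)  # train this algo/hyperparameter pair
            scores[c, s] = evaluate_policy(policy, val)  # OPE on held-out data
    # Choose the candidate with the best average held-out OPE estimate,
    # then retrain it on the full dataset before deployment.
    best = int(np.argmax(scores.mean(axis=1)))
    return candidates[best](dataset), best
```

Averaging OPE scores over repeated random splits, rather than relying on a single train/validation partition, is one way to reduce the variance of the selection step when the dataset is small, which is the regime the paper targets.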