Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism
Banghua Zhu, Department of EECS, UC Berkeley
–Neural Information Processing Systems
Offline (or batch) reinforcement learning (RL) algorithms seek to learn an optimal policy from a fixed dataset without active data collection. Based on the composition of the offline dataset, two main methods are used: imitation learning, which is suitable for expert datasets, and vanilla offline RL, which often requires datasets with uniform coverage. From a practical standpoint, datasets often deviate from these two extremes, and the exact data composition is usually unknown. To bridge this gap, we present a new offline RL framework that smoothly interpolates between the two extremes of data composition, thereby unifying imitation learning and vanilla offline RL. The new framework is centered around a weak version of the concentrability coefficient that measures the deviation of the behavior policy from the expert policy alone.
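As a rough formalization of the quantity described above (a sketch consistent with the abstract, not quoted from the paper; the symbols d^{\pi^\star} and \mu are introduced here for illustration), this single-policy concentrability coefficient can be written as

C^\star := \max_{s,a} \frac{d^{\pi^\star}(s,a)}{\mu(s,a)},

where d^{\pi^\star} is the state-action occupancy distribution of the expert (optimal) policy and \mu is the distribution generating the offline dataset. Under this reading, C^\star = 1 corresponds to a pure expert dataset (the imitation-learning regime), while larger C^\star corresponds to the broader coverage typically assumed by vanilla offline RL.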