CLARE: Conservative Model-Based Reward Learning for Offline Inverse Reinforcement Learning
Yue, Sheng, Wang, Guanbo, Shao, Wei, Zhang, Zhaofeng, Lin, Sen, Ren, Ju, Zhang, Junshan
arXiv.org Artificial Intelligence
This work aims to tackle a major challenge in offline Inverse Reinforcement Learning (IRL), namely the reward extrapolation error, where the learned reward function may fail to explain the task correctly and misguide the agent in unseen environments due to the intrinsic covariate shift. Leveraging both expert data and lower-quality diverse data, we devise a principled algorithm (namely CLARE) that solves offline IRL efficiently by integrating "conservatism" into a learned reward function and utilizing an estimated dynamics model. Our theoretical analysis provides an upper bound on the return gap between the learned policy and the expert policy, based on which we characterize the impact of covariate shift by examining the subtle two-tier tradeoffs between "exploitation" (on both expert and diverse data) and "exploration" (on the estimated dynamics model). We show that CLARE can provably alleviate the reward extrapolation error by striking the right "exploitation-exploration" balance therein. Extensive experiments corroborate the significant performance gains of CLARE over existing state-of-the-art algorithms on MuJoCo continuous control tasks (especially with a small offline dataset), and the learned reward is highly instructive for further learning (source code).

The primary objective of Inverse Reinforcement Learning (IRL) is to learn a reward function from demonstrations (Arora & Doshi, 2021; Russell, 1998). In general, conventional IRL methods rely on extensive online trial and error, which can be costly, or require a fully known transition model (Abbeel & Ng, 2004; Ratliff et al., 2006; Ziebart et al., 2008; Syed & Schapire, 2007; Boularias et al., 2011; Osa et al., 2018), and thus struggle to scale in many real-world applications. To tackle this problem, this paper studies offline IRL, with a focus on learning from a previously collected dataset without online interaction with the environment.
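To make the idea of "conservatism" in reward learning concrete, the following is a minimal, heavily simplified sketch of the general principle: raise the learned reward on expert data while lowering it on state-actions generated from an estimated dynamics model, so the reward stays pessimistic outside the data support. The linear reward, 1-D features, and update rule here are illustrative assumptions for exposition only, not CLARE's actual objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D state-action features: expert demonstrations cluster
# near +1, while rollouts from an estimated dynamics model spread widely.
expert_sa = rng.normal(1.0, 0.1, size=(256, 1))  # expert data features
model_sa = rng.normal(0.0, 1.0, size=(256, 1))   # model-rollout features

# Linear reward r(s, a) = w . phi(s, a). The conservative update maximizes
# E_expert[r] - E_model[r]: reward is pushed up on in-distribution expert
# data and down on model-generated data, discouraging extrapolation.
w = np.zeros(1)
lr = 0.1
for _ in range(200):
    grad = expert_sa.mean(axis=0) - model_sa.mean(axis=0)
    w += lr * grad

# The learned reward ranks an expert-like point above an
# out-of-distribution one.
r_expert = float(w @ np.array([1.0]))
r_ood = float(w @ np.array([-2.0]))
```

A pessimistic reward of this kind keeps a downstream planner from chasing high rewards in regions the offline data never covered, which is the extrapolation failure the abstract describes.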
Feb-20-2023