Model Selection for Off-policy Evaluation: New Algorithms and Experimental Protocol
Pai Liu, Lingfeng Zhao, Shivangi Agarwal, Jinghan Liu, Audrey Huang, Philip Amortila, Nan Jiang
Holdout validation and hyperparameter tuning from data are long-standing problems in offline reinforcement learning (RL). A standard framework is to use off-policy evaluation (OPE) methods to evaluate and select the policies, but OPE either incurs exponential variance (e.g., importance sampling) or has hyperparameters of its own (e.g., FQE and model-based methods). In this work we focus on hyperparameter tuning for OPE itself, which is even more under-investigated. Concretely, we select among candidate value functions ("model-free") or dynamics models ("model-based") to best assess the performance of a target policy. Our contributions are twofold. We develop: (1) new model-free and model-based selectors with theoretical guarantees, and (2) a new experimental protocol for empirically evaluating them. Compared to the model-free protocol in prior works, our new protocol allows for more stable generation of candidate value functions, better control of misspecification, and evaluation of model-free and model-based methods alike. We exemplify the protocol on a Gym environment, and find that our new model-free selector, LSTD-Tournament, demonstrates promising empirical performance.
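To make the model-free selection setup concrete, below is a minimal sketch (not the paper's method): given candidate Q-functions for a fixed target policy, each candidate implies an estimate of the policy's value J(pi) = E[Q(s0, pi(s0))], and a selector scores the candidates from offline transition data. The squared-TD-error score used here is only a simple illustrative baseline, not the LSTD-Tournament selector; all function and variable names are hypothetical.

```python
# Sketch of model-free selection for OPE: score candidate Q-functions on
# offline data, pick one, and report its implied value estimate for pi.
import numpy as np

def policy_value_estimate(q_fn, initial_states, target_policy):
    """Estimate J(pi) = E_{s0}[Q(s0, pi(s0))] under a candidate Q-function."""
    return float(np.mean([q_fn(s, target_policy(s)) for s in initial_states]))

def td_error_score(q_fn, transitions, target_policy, gamma=0.99):
    """Average squared TD error of a candidate on offline transitions
    (s, a, r, s_next, done) collected by some behavior policy."""
    errs = []
    for s, a, r, s_next, done in transitions:
        target = r + (0.0 if done else gamma * q_fn(s_next, target_policy(s_next)))
        errs.append((q_fn(s, a) - target) ** 2)
    return float(np.mean(errs))

def select_candidate(candidates, transitions, initial_states, target_policy):
    """Return the index of the lowest-scoring candidate and its value estimate."""
    scores = [td_error_score(q, transitions, target_policy) for q in candidates]
    best = int(np.argmin(scores))
    return best, policy_value_estimate(candidates[best], initial_states, target_policy)
```

Note that naive squared-TD-error scoring is known to be biased in stochastic environments (the double-sampling problem), which is part of what motivates more principled selectors such as those studied in the paper.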
Feb-11-2025