Towards Assessing and Benchmarking Risk-Return Tradeoff of Off-Policy Evaluation

Haruka Kiyohara, Ren Kishimoto, Kosuke Kawakami, Ken Kobayashi, Kazuhide Nakata, Yuta Saito

arXiv.org Artificial Intelligence 

Off-Policy Evaluation (OPE) aims to assess the effectiveness of counterfactual policies using only offline logged data and is often used to identify the top-k promising policies for deployment in online A/B tests. Existing evaluation metrics for OPE estimators primarily focus on the "accuracy" of OPE or that of downstream policy selection, neglecting the risk-return tradeoff in the subsequent online policy deployment. To address this issue, we draw inspiration from portfolio evaluation in finance and develop a new metric, called SharpeRatio@k, which measures the risk-return tradeoff of the policy portfolios formed by an OPE estimator under varying online evaluation budgets (k). We validate our metric in two example scenarios, demonstrating its ability to distinguish between low-risk and high-risk estimators and to accurately identify the most efficient one. The efficient estimator is characterized by its capability to form the most advantageous policy portfolios, maximizing return while minimizing risk during online deployment, a nuance that existing metrics typically overlook. These experiments suggest several interesting directions for future OPE research.

Reinforcement Learning (RL) has achieved considerable success in a variety of applications requiring sequential decision-making. Nonetheless, its online learning approach is often seen as problematic due to the need for active interaction with the environment, which can be risky, time-consuming, and unethical (Fu et al., 2021; Matsushima et al., 2021). To mitigate these issues, learning new policies offline from existing historical data, known as Offline RL (Levine et al., 2020), is becoming increasingly popular for real-world applications (Qin et al., 2021). Typically, in the offline RL lifecycle, promising candidate policies are initially screened through Off-Policy Evaluation (OPE) (Fu et al., 2020), and the final production policy is then selected from the shortlisted candidates using more dependable online A/B tests (Kurenkov & Kolesnikov, 2022), as shown in Figure 1.

When evaluating the efficacy of OPE methods, research has largely concentrated on "accuracy" metrics such as mean-squared error (MSE) (Uehara et al., 2022; Voloshin et al., 2019), rank correlation (rankcorr) (Fu et al., 2021; Paine et al., 2020), and regret in subsequent policy selection (Doroudi et al., 2017; Tang & Wiens, 2021). However, these existing metrics do not adequately assess the balance between risk and return experienced when deploying the policies selected by an estimator online. Crucially, MSE and rankcorr cannot distinguish whether an estimator under-evaluates near-optimal policies or over-evaluates poor-performing ones, two failure modes that influence the risk-return dynamics of OPE and policy selection in very different ways.
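To make the idea concrete, below is a minimal sketch of how a SharpeRatio@k-style metric can be computed. It assumes, following the description above, that the "return" of the top-k policy portfolio is the best true policy value in the portfolio measured relative to the behavior policy (treated as the risk-free baseline), and that the "risk" is the standard deviation of true policy values within the portfolio. The function and variable names (`sharpe_ratio_at_k`, `v_behavior`, etc.) are hypothetical and not taken from any particular implementation.

```python
import numpy as np


def sharpe_ratio_at_k(estimated_values, true_values, v_behavior, k):
    """Sketch of a SharpeRatio@k-style metric for an OPE estimator.

    The estimator ranks candidate policies by `estimated_values`; the top-k
    policies form the portfolio deployed online. Return is the best true
    value in the portfolio relative to the behavior policy's value
    (`v_behavior`, the risk-free baseline); risk is the standard deviation
    of true values within the portfolio.
    """
    estimated_values = np.asarray(estimated_values, dtype=float)
    true_values = np.asarray(true_values, dtype=float)

    # Policies ranked by the estimator, best first; keep the top-k.
    top_k = np.argsort(-estimated_values)[:k]
    portfolio = true_values[top_k]

    best_at_k = portfolio.max()                        # return of the portfolio
    std_at_k = portfolio.std(ddof=1) if k > 1 else np.nan  # risk of the portfolio
    return (best_at_k - v_behavior) / std_at_k
```

As a toy illustration of why accuracy metrics alone are insufficient, consider two estimators with similar ranking quality: one that under-evaluates a near-optimal policy and one that over-evaluates a poor policy. The latter pulls a low-value policy into the deployed portfolio, inflating its risk and lowering its SharpeRatio@k, even though MSE or rank correlation may judge the two estimators comparably.

```python
true_values = np.array([10.0, 8.0, 6.0, 2.0, 1.0])
v_behavior = 5.0

est_conservative = np.array([7.0, 8.5, 6.0, 2.0, 1.0])   # under-evaluates the best policy
est_overconfident = np.array([10.0, 8.0, 6.0, 9.0, 1.0])  # over-evaluates a poor policy

print(sharpe_ratio_at_k(est_conservative, true_values, v_behavior, k=3))
print(sharpe_ratio_at_k(est_overconfident, true_values, v_behavior, k=3))
```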