Off-Policy Evaluation with Out-of-Sample Guarantees
Sofia Ek, Dave Zachariah, Fredrik D. Johansson, Petre Stoica
arXiv.org Artificial Intelligence
We consider the problem of evaluating the performance of a decision policy using past observational data. The outcome of a policy is measured in terms of a loss (also known as disutility or negative reward), and the main problem is to make valid inferences about its out-of-sample loss when the past data were observed under a different, and possibly unknown, policy. Using a sample-splitting method, we show that it is possible to draw such inferences with finite-sample coverage guarantees about the entire loss distribution, rather than just its mean. Importantly, the method takes into account misspecification of the past-policy model, including unmeasured confounding. The evaluation method can be used to certify the performance of a policy using observational data under a specified range of credible model assumptions.
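To make the abstract concrete, the sketch below illustrates one way such a finite-sample bound on the loss distribution could be computed; it is not the authors' exact procedure. It uses a weighted-quantile (conformal-style) construction: importance weights are formed from a behavior-policy model fitted on a held-out split, a safeguard point at infinity preserves finite-sample validity, and a user-chosen factor `gamma` bounds how far the estimated behavior policy may be from the truth (e.g. due to unmeasured confounding). The function name `robust_loss_quantile_bound`, the choice of unit weight for the safeguard point, the factor `gamma`, and the simulated data are all assumptions made here for illustration.

```python
import numpy as np

def robust_loss_quantile_bound(losses, pi_target, pi_behavior_hat, alpha=0.1, gamma=1.0):
    """Conservative (1 - alpha)-quantile bound on the target-policy loss.

    losses          : losses observed under the logged (behavior) policy
    pi_target       : target-policy probabilities of the logged actions
    pi_behavior_hat : estimated behavior-policy probabilities of the same actions
    alpha           : miscoverage level
    gamma           : assumed bound on how far the estimated behavior policy may be
                      off (gamma = 1 takes the fitted model at face value)
    """
    # Nominal importance weights from the estimated behavior policy.
    w_hat = pi_target / pi_behavior_hat
    order = np.argsort(losses)
    L = losses[order]
    w = w_hat[order]
    w_inf = 1.0  # weight of the safeguard point at +infinity

    # Worst-case weighted CDF at each candidate threshold L[k]: weights at or below
    # the threshold are shrunk by gamma, weights above it (and the safeguard point)
    # are inflated by gamma, which makes the resulting quantile bound conservative.
    low = np.cumsum(w / gamma)
    high = (w * gamma).sum() - np.cumsum(w * gamma)
    cdf_lower = low / (low + high + gamma * w_inf)

    # Smallest observed loss whose worst-case CDF reaches 1 - alpha; +inf if none does.
    k = np.searchsorted(cdf_lower, 1.0 - alpha)
    return L[k] if k < len(L) else np.inf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    # In a sample-splitting setup, pi_behavior_hat would be fit on one half of the
    # logged data and the bound computed on the other half (only the latter shown).
    losses = rng.exponential(scale=1.0, size=n)      # observed losses
    pi_behavior_hat = rng.uniform(0.2, 0.8, size=n)  # estimated logging probabilities
    pi_target = rng.uniform(0.2, 0.8, size=n)        # target-policy probabilities
    for gamma in (1.0, 1.5, 2.0):
        bound = robust_loss_quantile_bound(losses, pi_target, pi_behavior_hat, alpha=0.1, gamma=gamma)
        print(f"gamma = {gamma}: 90%-quantile loss bound = {bound:.3f}")
```

As the example suggests, widening the credible range for the behavior-policy model (larger `gamma`) can only loosen the certified bound, and a vacuous bound (+inf) signals that the data cannot certify the policy at the requested level under those assumptions.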
Jun-30-2023
- Country:
  - Europe (0.28)
- Genre:
  - Research Report > Experimental Study (0.46)
- Industry:
  - Health & Medicine (1.00)