Double Clipping: Less-Biased Variance Reduction in Off-Policy Evaluation

Jan Malte Lichtenberg, Alexander Buchholz, Giuseppe Di Benedetto, Matteo Ruffini, Ben London

arXiv.org Artificial Intelligence 

Counterfactual off-policy estimators allow us to estimate the performance of a new target recommendation policy from interaction data logged under a different logging policy (for instance, the current production recommender), thereby reducing the need to run slow and costly A/B tests. Many such estimators are based on the inverse propensity scoring (IPS) principle [2, 6, 7, 12]. Given a stochastic logging policy and some mild assumptions, IPS-based estimators are unbiased, but they often suffer from high variance. This holds even at industrial-scale data sizes, in particular when the logging policy is close to deterministic.
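
To make the IPS principle concrete, the sketch below implements the classical IPS estimator and a standard single-threshold clipped variant, which caps the importance weights to trade variance for bias. This is background material, not the paper's double-clipping estimator; the function names, the clipping threshold, and the synthetic logged data are illustrative assumptions.

    import numpy as np

    def ips_estimate(rewards, target_probs, logging_probs):
        # Vanilla IPS: reweight each logged reward by pi(a|x) / mu(a|x).
        # Unbiased under a stochastic logging policy with full support,
        # but the weights (and hence the variance) can blow up when
        # logging_probs are small.
        weights = target_probs / logging_probs
        return np.mean(weights * rewards)

    def clipped_ips_estimate(rewards, target_probs, logging_probs, clip=10.0):
        # Standard (single) clipping: cap the importance weights at a
        # threshold to reduce variance, at the cost of introducing bias.
        weights = np.minimum(target_probs / logging_probs, clip)
        return np.mean(weights * rewards)

    # Toy usage on synthetic logged data (hypothetical values).
    rng = np.random.default_rng(0)
    n = 10_000
    logging_probs = rng.uniform(0.05, 1.0, size=n)   # mu(a_i | x_i)
    target_probs = rng.uniform(0.0, 1.0, size=n)     # pi(a_i | x_i)
    rewards = rng.binomial(1, 0.3, size=n).astype(float)
    print(ips_estimate(rewards, target_probs, logging_probs))
    print(clipped_ips_estimate(rewards, target_probs, logging_probs, clip=5.0))

The clipping threshold controls the bias-variance trade-off: a lower cap suppresses the large weights that arise when the logging policy is nearly deterministic, but moves the estimate further from the unbiased IPS value.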
