From Importance Sampling to Doubly Robust Policy Gradient

Jiawei Huang, Nan Jiang

arXiv.org Machine Learning 

We show that policy gradient (PG) and its variance reduction variants can be derived by taking finite differences of function evaluations supplied by estimators from the importance sampling (IS) family for off-policy evaluation (OPE). Starting from the doubly robust (DR) estimator [Jiang and Li, 2016], we provide a simple derivation of a very general and flexible form of PG, which subsumes the state-of-the-art variance reduction technique [Cheng et al., 2019] as a special case and immediately hints at further variance reduction opportunities overlooked by the existing literature.
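To make the finite-difference claim concrete, here is a minimal worked sketch, assuming the episodic notation of Jiang and Li [2016] (states $s_t$, actions $a_t$, rewards $r_t$, horizon $H$, discount $\gamma$, behavior policy $\mu$); none of this notation is defined in the abstract itself. Each coordinate of the gradient of the true return $J(\pi_\theta)$ can be approximated by a finite difference of two function evaluations, each supplied by an OPE estimator $\hat{J}$:

\[
[\nabla_\theta J(\pi_\theta)]_i \;\approx\; \frac{\hat{J}(\theta + \delta e_i) - \hat{J}(\theta)}{\delta} \;\xrightarrow{\;\delta \to 0\;}\; [\nabla_\theta \hat{J}(\theta)]_i .
\]

Instantiating $\hat{J}$ with the trajectory-wise IS estimator,

\[
\hat{J}_{\mathrm{IS}}(\theta) \;=\; \Bigg(\prod_{t=0}^{H-1} \frac{\pi_\theta(a_t \mid s_t)}{\mu(a_t \mid s_t)}\Bigg) \sum_{t=0}^{H-1} \gamma^t r_t ,
\]

and differentiating at the on-policy point $\mu = \pi_\theta$ (where every importance ratio equals 1) gives

\[
\nabla_\theta \hat{J}_{\mathrm{IS}}(\theta)\Big|_{\mu = \pi_\theta} \;=\; \Bigg(\sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\Bigg) \sum_{t=0}^{H-1} \gamma^t r_t ,
\]

which is the classical REINFORCE gradient. Replacing $\hat{J}_{\mathrm{IS}}$ with the DR estimator, which subtracts a learned $\hat{Q}$/$\hat{V}$ control variate at every step, yields the variance-reduced family of gradients the abstract describes.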
