Rejoinder: New Objectives for Policy Learning

Kallus, Nathan

arXiv.org Machine Learning 

I would like to thank the discussants, Oliver Dukes and Stijn Vansteelandt (DV), Sijia Li, Xiudi Li, and Alex Luedtke (LLL), and Muxuan Liang and Yingqi Zhao (LZ), for a very thoughtful discussion both of my contribution (Kallus 2020) and of Mo et al. (2020). I similarly thank the editors for putting together this exciting special issue and for curating a timely discussion on new objectives for policy learning. I found the juxtaposition between the two papers particularly apt: while my paper tries to induce an optimal covariate shift based on the premise of invariance, Mo et al. (2020) try to be robust to an undesirable covariate shift for fear of variations. While one optimistically alters the training population, the other pessimistically considers the worst-possible testing population. In the following I review some discussant comments that stood out to me as particularly perceptive and offer some reflections.
