
GenDICE


Appendix A: Additional Table

Neural Information Processing Systems

Table 2 presents the numerical results for the ablation study in Section 4.2. The results of our main method in Section 4.1 are reported in the column Main. Test denotes the variant that uses the estimated reward function as the test function when training the MIW ω. This may be related to the unstable estimation of the KL-dual discussed in Section 3.2. Removing rollout data from policy learning generally leads to worse performance and larger standard deviations. From Eq. (22), the MIW ω can be optimized via two alternative approaches. (1) We can …
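Since the snippet cuts off before the two approaches, below is a minimal sketch of the kind of min-max training loop DICE-style MIW estimators typically use: the weight ω is kept positive via an exponential, a test function f is trained adversarially, and a penalty encourages E[ω] = 1. Eq. (22) is not reproduced here; the network shapes, the λ weight, and the `sample_batch` helper are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

# Hedged sketch of DICE-style MIW training; not the paper's exact Eq. (22).
dim = 8  # illustrative state-feature dimension

def sample_batch(n=256):
    # Stand-in for drawing (s0, s, s') batches from the offline dataset.
    return torch.randn(n, dim), torch.randn(n, dim), torch.randn(n, dim)

def mlp():
    return nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

log_omega, f = mlp(), mlp()  # log-MIW (kept positive via exp) and test fn
opt_w = torch.optim.Adam(log_omega.parameters(), lr=3e-4)
opt_f = torch.optim.Adam(f.parameters(), lr=3e-4)

def saddle_loss(s0, s, s_next, gamma=0.99, lam=1.0):
    w = torch.exp(log_omega(s))  # positive marginal importance weight
    # Bellman-flow residual tested against f, with a concave -f^2/4 term
    # bounding the inner maximization, plus a penalty pushing E[w] -> 1.
    res = (1 - gamma) * f(s0).mean() + (w * (gamma * f(s_next) - f(s))).mean()
    return res - 0.25 * (f(s) ** 2).mean() + lam * (w.mean() - 1.0) ** 2

for _ in range(1000):
    batch = sample_batch()
    opt_f.zero_grad(); (-saddle_loss(*batch)).backward(); opt_f.step()  # max f
    batch = sample_batch()
    opt_w.zero_grad(); saddle_loss(*batch).backward(); opt_w.step()     # min w
```

The alternating ascent on f and descent on ω is one common way to approach such saddle points; the snippet's mention of using the estimated reward as the test function corresponds to replacing the learned f above with a fixed reward network.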


…about the assumptions, related work, and evaluation.

Neural Information Processing Systems

We thank all the reviewers for their valuable time and feedback. Note that multiple recent works (offline and online) simply assume a linear MDP with known features in their analysis. The KL-divergence formulation can impose different distribution priors when available. We agree about Section 3.3 and in retrospect should have saved the space; we will remove it and move some of the Appendix into the paper. We will add references to maximum-entropy approaches in RL and IRL.


GenDICE: Generalized Offline Estimation of Stationary Values

Zhang, Ruiyi, Dai, Bo, Li, Lihong, Schuurmans, Dale

arXiv.org Machine Learning

An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain. In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without additional interaction with the environment being available. We show that consistent estimation remains possible in this challenging scenario, and that effective estimation can still be achieved in important applications. Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions, derived from fundamental properties of the stationary distribution, and exploiting constraint reformulations based on variational divergence minimization. The resulting algorithm, GenDICE, is straightforward and effective. We prove its consistency under general conditions, provide an error analysis, and demonstrate strong empirical performance on benchmark problems, including off-line PageRank and off-policy policy evaluation.
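To make the estimation target concrete, here is a minimal tabular sketch, under the assumption of an ergodic chain in which every state appears in the logged data: it recovers the stationary distribution of the empirical transition kernel by power iteration and divides by the empirical marginal, yielding the correction ratio that GenDICE estimates. GenDICE itself solves a variational saddle-point so the same ratio can be estimated with function approximation; the function name and clipping constants here are illustrative.

```python
import numpy as np

def tabular_stationary_ratio(transitions, n_states, n_iters=1000):
    """Illustrative tabular analogue of GenDICE's target: the ratio
    tau(s) = mu(s) / p(s) between the chain's stationary distribution mu
    and the empirical data distribution p, from logged transitions only."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in transitions:
        counts[s, s_next] += 1.0
    p_data = counts.sum(axis=1) / counts.sum()       # empirical marginal over s
    row_sums = np.clip(counts.sum(axis=1, keepdims=True), 1.0, None)
    P_hat = counts / row_sums                        # empirical transition kernel
    mu = np.full(n_states, 1.0 / n_states)
    for _ in range(n_iters):                         # power iteration: mu = mu P
        mu = mu @ P_hat
        mu /= mu.sum()                               # guard against mass leakage
    return mu / np.clip(p_data, 1e-8, None)          # correction ratio tau(s)
```

For off-line PageRank, mu itself is the quantity of interest; for off-policy evaluation, tau reweights the logged samples toward the target policy's stationary distribution.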


GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values

Zhang, Shangtong, Liu, Bo, Whiteson, Shimon

arXiv.org Machine Learning

We present GradientDICE for estimating the density ratio between the state distribution of the target policy and the sampling distribution in off-policy reinforcement learning. GradientDICE fixes several problems with GenDICE (Zhang et al., 2020), the current state-of-the-art for estimating such density ratios. Namely, the optimization problem in GenDICE is not a convex-concave saddle-point problem once nonlinearity in optimization variable parameterization is introduced, so primal-dual algorithms are not guaranteed to find the desired solution. However, such nonlinearity is essential to ensure the consistency of GenDICE even with a tabular representation. This is a fundamental contradiction, resulting from GenDICE's original formulation of the optimization problem. In GradientDICE, we optimize a different objective from GenDICE by using the Perron-Frobenius theorem and eliminating GenDICE's use of divergence. Consequently, nonlinearity in parameterization is not necessary for GradientDICE, which is provably convergent under linear function approximation.
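A hedged reading of what "provably convergent under linear function approximation" buys in practice: with tau(s) = theta·phi(s) and f(s) = w·phi(s) both linear, and a scalar eta enforcing the normalization E_d[tau] = 1, the saddle point can be approached by plain stochastic gradient descent-ascent. The Lagrangian in the docstring below is our assumption of a GradientDICE-like objective, not a transcription from the paper.

```python
import numpy as np

def gradient_dice_step(theta, w, eta, phi_s0, phi_s, phi_s2,
                       gamma=0.99, lam=1.0, lr=1e-2):
    """One stochastic descent-ascent step on an assumed GradientDICE-like
    Lagrangian with linear tau(s) = theta @ phi(s) and f(s) = w @ phi(s):
      L = (1-gamma) f(s0) + gamma tau(s) f(s') - tau(s) f(s)
          - f(s)^2 / 2 + lam * (eta * tau(s) - eta - eta^2 / 2),
    minimized over theta and maximized over (w, eta)."""
    tau = theta @ phi_s
    f_s, f_s2 = w @ phi_s, w @ phi_s2
    g_theta = (gamma * f_s2 - f_s + lam * eta) * phi_s           # dL/dtheta
    g_w = (1 - gamma) * phi_s0 + tau * (gamma * phi_s2 - phi_s) - f_s * phi_s
    g_eta = lam * (tau - 1.0 - eta)                              # dL/deta
    theta = theta - lr * g_theta        # descend on the density-ratio side
    w = w + lr * g_w                    # ascend on the test-function side
    eta = eta + lr * g_eta              # ascend on the normalization multiplier
    return theta, w, eta
```

Each call consumes one logged transition (phi_s, phi_s2) plus an independently sampled initial-state feature phi_s0; gamma, lam, and lr are illustrative defaults. Because tau is linear with no positivity nonlinearity, the updates stay within the convex-concave regime the abstract contrasts with GenDICE.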