Goto

Collaborating Authors

Morimura


Reviews: State-Aware Imitation Learning

Neural Information Processing Systems

This paper proposes a framework for maximum a posteriori (MAP) estimation-based imitation learning, which augments the standard supervised learning loss with a cost for deviating from the observed state distribution. The key idea for optimizing this loss is a gradient expression for the stationary distribution of a given policy that can be estimated online from policy rollouts, extending a previous result by Morimura (2010). I find the approach original and potentially interesting to the NIPS community. The consequences of deviating from the demonstrated states in imitation learning have been recognized before, but this paper proposes a novel approach to the problem.
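
To make the idea concrete, below is a minimal sketch of the combined update the review describes: the standard supervised imitation gradient plus a term pulling the policy toward the demonstrated state distribution. The toy MDP, the demo set, and the weight lam are illustrative assumptions; for clarity, the gradient of the log stationary distribution is computed by finite differences here, rather than with the paper's online rollout-based estimator.

    import numpy as np

    # Toy sketch (assumed setup): supervised imitation plus a pull toward the
    # demonstrated state distribution. The finite-difference gradient below is
    # a stand-in for the paper's online estimator (extending Morimura, 2010).

    rng = np.random.default_rng(0)
    n_states, n_actions = 4, 2
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state dist.
    theta = np.zeros((n_states, n_actions))  # softmax policy logits

    def policy(theta):
        z = np.exp(theta - theta.max(axis=1, keepdims=True))
        return z / z.sum(axis=1, keepdims=True)

    def stationary(theta):
        # Stationary state distribution of the chain induced by the policy.
        pi = policy(theta)
        T = np.einsum('sa,sat->st', pi, P)
        evals, evecs = np.linalg.eig(T.T)
        v = np.real(evecs[:, np.argmax(np.real(evals))])
        return np.abs(v) / np.abs(v).sum()

    def grad_log_stationary(theta, s, eps=1e-5):
        # Finite-difference stand-in for the paper's rollout-based estimator.
        base = np.log(stationary(theta)[s])
        g = np.zeros_like(theta)
        for idx in np.ndindex(*theta.shape):
            t = theta.copy()
            t[idx] += eps
            g[idx] = (np.log(stationary(t)[s]) - base) / eps
        return g

    demos = [(0, 1), (1, 0), (2, 1)]  # expert (state, action) pairs (made up)
    lam, lr = 0.1, 0.5
    for _ in range(100):
        pi = policy(theta)
        g = np.zeros_like(theta)
        for s, a in demos:
            g[s] += np.eye(n_actions)[a] - pi[s]      # grad of log pi(a|s)
            g += lam * grad_log_stationary(theta, s)  # stay near demo states
        theta += lr * g / len(demos)

Setting lam = 0 recovers plain behavioral cloning; the extra term is what penalizes drifting away from the states the expert actually visited.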


Geometry and convergence of natural policy gradient methods

Müller, Johannes, Montúfar, Guido

arXiv.org Artificial Intelligence

We study the convergence of several natural policy gradient (NPG) methods in infinite-horizon discounted Markov decision processes with regular policy parametrizations. For a variety of NPGs and reward functions, we show that the trajectories in state-action space are solutions of gradient flows with respect to Hessian geometries, from which we obtain global convergence guarantees and convergence rates. In particular, we show linear convergence for unregularized and regularized NPG flows with the metrics proposed by Kakade and by Morimura and co-authors, by observing that these arise from the Hessian geometries of conditional entropy and entropy, respectively. Further, we obtain sublinear convergence rates for Hessian geometries arising from other convex functions, such as log-barriers. Finally, we interpret the discrete-time NPG methods with regularized rewards as inexact Newton methods when the NPG is defined with respect to the Hessian geometry of the regularizer. This yields local quadratic convergence of these methods for a step size equal to the penalization strength.
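
As a rough illustration of the Kakade-style NPG the abstract refers to, the sketch below preconditions the vanilla policy gradient with the Fisher information matrix of a softmax policy; the one-state "bandit" setting, rewards, and step size are assumptions made for illustration, not the paper's setting.

    import numpy as np

    # Minimal sketch (assumed setup): unregularized natural policy gradient
    # with Kakade's Fisher metric on a one-state softmax problem.

    r = np.array([1.0, 0.5, 0.2])  # per-action rewards (illustrative)
    theta = np.zeros(3)            # softmax logits

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    for _ in range(100):
        pi = softmax(theta)
        grad = pi * (r - pi @ r)                 # vanilla gradient of E_pi[r]
        F = np.diag(pi) - np.outer(pi, pi)       # Fisher information of softmax
        theta += 0.1 * np.linalg.pinv(F) @ grad  # F is singular (logits are
                                                 # shift-invariant), so use pinv

    print(softmax(theta))  # probability mass concentrates on the best action

In this softmax case the preconditioned direction is the centered reward vector itself, so the logits move at a constant rate toward the best action; this is the kind of accelerated behavior, relative to the vanishing vanilla gradient, that the paper's Hessian-geometry analysis makes precise for general MDPs.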