Reviews: On the Correctness and Sample Complexity of Inverse Reinforcement Learning

Neural Information Processing Systems 

This work introduces a geometric analysis of the inverse reinforcement learning (IRL) problem and establishes a formal guarantee on the optimality of the reward function recovered from empirical data. The authors also derive the sample complexity of their proposed l1-regularized Support Vector Machine formulation. Overall, this is an interesting work that makes a significant contribution to the theoretical side of the inverse reinforcement learning problem. However, a few concerns need to be addressed.

Major:

1. The paper does not define the problem as a stand-alone question in the field. The problem formulation relies heavily on the earlier work of Ng & Russell (2000) and reads only as a follow-up to that work.
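As context for the sample-complexity discussion: the general shape of an l1-regularized SVM can be sketched in a few lines of numpy. This is a generic illustration of hinge loss plus an l1 penalty trained by subgradient descent, not the authors' exact formulation; all names and parameters here are the reviewer's own hypothetical choices.

```python
import numpy as np

def l1_svm(X, y, lam=0.1, lr=0.01, epochs=200):
    """Illustrative l1-regularized linear SVM via subgradient descent.

    Minimizes (1/n) * sum hinge(y_i, w.x_i) + lam * ||w||_1.
    Hypothetical sketch -- not the formulation analyzed in the paper.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1  # points violating the margin
        # Subgradient of the averaged hinge loss plus the l1 penalty.
        grad = -(X.T @ (y * active)) / n + lam * np.sign(w)
        w -= lr * grad
    return w

# Toy data with a sparse ground-truth weight vector (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = np.sign(X @ w_true)
w = l1_svm(X, y)
```

The l1 penalty drives the coordinates with no predictive signal toward zero, which is the sparsity property that typically underlies l1-based sample-complexity bounds.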