Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation

Ruohan Wang, Carlo Ciliberto, Pierluigi Amadori, Yiannis Demiris

arXiv.org Machine Learning 

We consider a specific setting of imitation learning, the task of policy learning from expert demonstrations, in which the learner has only a finite number of expert trajectories and no further access to the expert. Two broad categories of approaches to this setting are behavioral cloning (BC) (Pomerleau, 1991), which directly learns a policy mapping states to actions via supervised learning on expert trajectories, and inverse reinforcement learning (IRL) (Ng & Russell, 2000; Abbeel & Ng, 2004), which learns a policy via reinforcement learning, using a cost function extracted from expert trajectories.

Most notably, BC has been successfully applied to autonomous driving (Bojarski et al., 2016; Bansal et al., 2018). Despite its simplicity, BC typically requires a large amount of training data to learn good policies, as it may suffer from compounding errors caused by covariate shift (Ross & Bagnell, 2010; Ross et al., 2011). BC is therefore often used as a policy initialization step for further reinforcement learning (Nagabandi et al., 2018; Rajeswaran et al., 2017).

IRL estimates a cost function from expert trajectories and uses reinforcement learning to derive policies. Because the cost function evaluates the quality of whole trajectories rather than that of individual actions, IRL avoids the problem of compounding errors. IRL has proved effective on a wide range of problems, from continuous control benchmarks in the MuJoCo environment (Ho & Ermon, 2016) to robot footstep planning (Ziebart et al., 2008). Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016; Baram et al., 2017) connects IRL to the general framework of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014).
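To make the BC baseline concrete, the following is a minimal sketch (not the paper's implementation) of fitting a policy network to expert state-action pairs by regression in PyTorch; the MLP architecture, MSE loss, and hyperparameters are assumptions chosen for illustration.

import torch
import torch.nn as nn


class Policy(nn.Module):
    """Small MLP mapping states to actions (architecture is an assumption)."""

    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.Tanh(),
            nn.Linear(64, action_dim),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        return self.net(states)


def behavioral_cloning(states: torch.Tensor, actions: torch.Tensor,
                       epochs: int = 200, lr: float = 1e-3) -> Policy:
    """Fit a policy to expert (state, action) pairs by plain regression."""
    policy = Policy(states.shape[1], actions.shape[1])
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # squared error for continuous actions
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), actions)
        loss.backward()
        opt.step()
    return policy


# Example with random stand-in data (4-d states, 2-d actions):
# policy = behavioral_cloning(torch.randn(256, 4), torch.randn(256, 2))

Because the policy is only supervised on states visited by the expert, small action errors at test time drive it toward unfamiliar states, which is the compounding-error problem noted above.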

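The adversarial step at the heart of GAIL can likewise be sketched as a binary classifier over (state, action) pairs whose output is converted into a surrogate reward for the reinforcement-learning step. The sketch below follows common conventions (a logit-valued discriminator and a -log(1 - D) reward) rather than any specific published implementation, and it omits the alternating policy-gradient update that the full method requires.

import torch
import torch.nn as nn


class Discriminator(nn.Module):
    """Classifier over (state, action) pairs; outputs unnormalized logits."""

    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64),
            nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, states: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([states, actions], dim=-1))


def discriminator_step(disc, opt, expert_s, expert_a, policy_s, policy_a):
    """One update: push expert pairs toward label 1, policy pairs toward 0."""
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(expert_s, expert_a)
    policy_logits = disc(policy_s, policy_a)
    loss = (bce(expert_logits, torch.ones_like(expert_logits))
            + bce(policy_logits, torch.zeros_like(policy_logits)))
    opt.zero_grad()
    loss.backward()
    opt.step()


def surrogate_reward(disc, states, actions):
    """Reward for the RL step; -log(1 - D) is one common convention."""
    with torch.no_grad():
        d = torch.sigmoid(disc(states, actions))
    return -torch.log(1.0 - d + 1e-8)

Because this reward scores whole trajectories as the policy rolls them out, rather than penalizing individual action deviations, the adversarial approach inherits IRL's robustness to compounding errors.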