Diffusing States and Matching Scores: A New Framework for Imitation Learning
Runzhe Wu, Yiding Chen, Gokul Swamy, Kianté Brantley, Wen Sun
arXiv.org Artificial Intelligence
Adversarial imitation learning is traditionally framed as a two-player zero-sum game between a learner and an adversarially chosen cost function, and can therefore be thought of as the sequential generalization of a Generative Adversarial Network (GAN). In recent years, however, diffusion models have emerged as a non-adversarial alternative to GANs that merely requires training a score function via regression, yet produces generations of higher quality. In response, we investigate how to lift insights from diffusion modeling to the sequential setting. We propose diffusing states and performing score matching along the diffused states to measure the discrepancy between the expert's and the learner's states. Our approach thus only requires training score functions to predict noise via standard regression, making it significantly easier and more stable to train than adversarial methods. Theoretically, we prove first- and second-order instance-dependent bounds with linear scaling in the horizon, showing that our approach avoids the compounding errors that stymie offline approaches to imitation learning. Empirically, we show that our approach outperforms both GAN-style and discriminator-free imitation learning baselines across various continuous control problems, including complex tasks such as controlling humanoids to walk, sit, crawl, and navigate through obstacles.

Fundamentally, in imitation learning (IL; Osa et al., 2018), we want to match the sequential behavior of an expert demonstrator. Different notions of what matching should mean for IL have been proposed in the literature, from f-divergences (Ho & Ermon, 2016; Ke et al., 2021) to integral probability metrics (IPMs; Müller, 1997; Sun et al., 2019; Kidambi et al., 2021; Swamy et al., 2021; Chang et al., 2021; Song et al., 2024). To compute the chosen divergence from the expert demonstrations so that the learner can optimize it, it is common to train a discriminator (i.e., a classifier) between expert and learner data. This discriminator is then used as a reward function for a policy update, an approach known as inverse reinforcement learning (IRL; Abbeel & Ng, 2004; Ziebart et al., 2008).
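To make the discriminator-as-reward recipe concrete, here is a minimal PyTorch-style sketch of a GAIL-like update (Ho & Ermon, 2016). The names `StateDiscriminator`, `discriminator_loss`, and `gail_reward` are illustrative placeholders, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateDiscriminator(nn.Module):
    """Illustrative classifier between expert and learner states."""
    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states):
        return self.net(states)  # logits: > 0 means "looks expert-like"

def discriminator_loss(disc, expert_states, learner_states):
    """Binary cross-entropy: expert states labeled 1, learner states 0."""
    logit_e = disc(expert_states)
    logit_l = disc(learner_states)
    return (F.binary_cross_entropy_with_logits(logit_e, torch.ones_like(logit_e))
            + F.binary_cross_entropy_with_logits(logit_l, torch.zeros_like(logit_l)))

def gail_reward(disc, states):
    """Reward for the policy update: higher where states look expert-like."""
    return F.logsigmoid(disc(states))
```

The instability the abstract alludes to comes from alternating this classification step with policy optimization against a reward that keeps moving as the discriminator retrains.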
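By contrast, the score-matching objective the abstract describes reduces to standard denoising regression: diffuse states with a forward noising process and train a network to predict the injected noise. The sketch below, using a VP/DDPM-style noise schedule, is one plausible reading of that recipe; `ScoreNet`, `diffuse`, and `denoising_loss` are hypothetical names, and the authors' exact parameterization and reward construction may differ.

```python
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Illustrative score network: (noised state, noise level) -> predicted noise."""
    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x_t, t):
        return self.net(torch.cat([x_t, t], dim=-1))

def diffuse(x0, t, alphas_bar):
    """Forward (VP) noising: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    abar = alphas_bar[t].unsqueeze(-1)
    eps = torch.randn_like(x0)
    return abar.sqrt() * x0 + (1.0 - abar).sqrt() * eps, eps

def denoising_loss(score_net, states, alphas_bar):
    """Plain regression: predict the noise that was injected. No adversary needed."""
    T = len(alphas_bar)
    t = torch.randint(0, T, (states.shape[0],))
    x_t, eps = diffuse(states, t, alphas_bar)
    t_in = t.float().unsqueeze(-1) / T  # normalized timestep as conditioning
    return ((score_net(x_t, t_in) - eps) ** 2).mean()
```

A score model fit this way on expert states can then be compared against the learner's diffused states to quantify their discrepancy, replacing minimax training with two regression problems; the precise reward the paper derives from the matched scores is not spelled out in the abstract.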
Oct-17-2024