Enabling Off-Policy Imitation Learning with Deep Actor Critic Stabilization
Sayambhu Sen, Shalabh Bhatnagar
arXiv.org Artificial Intelligence
Learning complex policies with Reinforcement Learning (RL) is often hindered by instability and slow convergence, a problem exacerbated by the difficulty of reward engineering. Imitation Learning (IL) from expert demonstrations bypasses this reliance on rewards. However, state-of-the-art IL methods, exemplified by Generative Adversarial Imitation Learning (GAIL) (Ho et al.), suffer from severe sample inefficiency. This is a direct consequence of their foundational on-policy algorithms, such as TRPO (Schulman et al.). In this work, we introduce an adversarial imitation learning algorithm that incorporates off-policy learning to improve sample efficiency. By combining an off-policy framework with auxiliary techniques, specifically double Q-network based stabilization and value learning without reward function inference, we demonstrate a reduction in the samples required to robustly match expert behavior.
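The abstract does not spell out the update rules, so the following is a minimal sketch of the two ingredients it names: a clipped double Q-network critic (TD3-style, taking the minimum of two target critics to damp overestimation) trained off-policy, with its learning signal derived directly from a GAIL-style discriminator surrogate (-log(1 - D(s, a))) rather than an inferred reward function. All names, network shapes, and hyperparameters here (mlp, STATE_DIM, GAMMA, critic_update, etc.) are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the paper's architecture is not given in the abstract.
STATE_DIM, ACTION_DIM, HIDDEN = 8, 2, 64

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, HIDDEN), nn.ReLU(),
                         nn.Linear(HIDDEN, out_dim))

# Twin critics and their frozen target copies (double-Q stabilization).
q1, q2 = mlp(STATE_DIM + ACTION_DIM, 1), mlp(STATE_DIM + ACTION_DIM, 1)
q1_tgt, q2_tgt = mlp(STATE_DIM + ACTION_DIM, 1), mlp(STATE_DIM + ACTION_DIM, 1)
q1_tgt.load_state_dict(q1.state_dict())
q2_tgt.load_state_dict(q2.state_dict())

actor = mlp(STATE_DIM, ACTION_DIM)     # deterministic policy head (sketch)
disc = mlp(STATE_DIM + ACTION_DIM, 1)  # GAIL-style discriminator logits

critic_opt = torch.optim.Adam(
    list(q1.parameters()) + list(q2.parameters()), lr=3e-4)
GAMMA = 0.99

def critic_update(s, a, s_next, done):
    """One clipped double-Q update on an off-policy replay minibatch.

    The learning signal is a surrogate from the discriminator,
    -log(1 - D(s, a)), so no reward function is ever inferred.
    """
    with torch.no_grad():
        d = torch.sigmoid(disc(torch.cat([s, a], dim=-1)))
        r = -torch.log(1.0 - d + 1e-8)          # discriminator surrogate
        a_next = actor(s_next)
        sa_next = torch.cat([s_next, a_next], dim=-1)
        # Minimum over the two target critics curbs value overestimation.
        q_next = torch.min(q1_tgt(sa_next), q2_tgt(sa_next))
        target = r + GAMMA * (1.0 - done) * q_next
    sa = torch.cat([s, a], dim=-1)
    loss = ((q1(sa) - target) ** 2 + (q2(sa) - target) ** 2).mean()
    critic_opt.zero_grad()
    loss.backward()
    critic_opt.step()
    return loss.item()

# Toy usage on a random minibatch, standing in for replay-buffer samples.
B = 32
s, a = torch.randn(B, STATE_DIM), torch.randn(B, ACTION_DIM)
s_next, done = torch.randn(B, STATE_DIM), torch.zeros(B, 1)
print(critic_update(s, a, s_next, done))
```

Because the critic is trained from replayed transitions rather than fresh on-policy rollouts, each environment sample can be reused across many updates, which is the source of the claimed sample-efficiency gain over TRPO-based GAIL.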
Nov-11-2025