Pretrain Soft Q-Learning with Imperfect Demonstrations
Zhang, Xiaoqin; Li, Yunfei; Ma, Huimin; Luo, Xiong
Pretraining reinforcement learning methods with demonstrations has been an important concept in reinforcement learning research, since existing algorithms spend a large amount of computing power on online simulation. It remains a significant challenge to exploit expert demonstrations during pretraining while preserving exploration potential, especially for value-based methods. In this paper, we propose a pretraining method for soft Q-learning. Our work is inspired by pretraining methods for actor-critic algorithms, since soft Q-learning is a value-based algorithm that is equivalent to policy gradient. The proposed method is based on $\gamma$-discounted biased policy evaluation with entropy regularization, which is also the update target of soft Q-learning. Our method is evaluated on various tasks from the Atari 2600 suite. Experiments show that our method effectively learns from imperfect demonstrations and outperforms other state-of-the-art methods that learn from expert demonstrations.
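As a point of reference, the "update target of soft Q-learning" mentioned in the abstract corresponds, in the standard formulation of soft Q-learning, to the entropy-regularized (soft) Bellman backup. The equations below sketch that standard target; the entropy temperature $\alpha$ and the soft value/Q notation are assumptions from the usual formulation, not symbols introduced by this paper, and the paper's exact pretraining objective may differ.
$$
V_{\mathrm{soft}}(s') = \alpha \log \int_{\mathcal{A}} \exp\!\left(\tfrac{1}{\alpha}\, Q_{\mathrm{soft}}(s', a')\right) \mathrm{d}a',
\qquad
Q_{\mathrm{soft}}(s, a) \leftarrow r(s, a) + \gamma \, \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\!\left[ V_{\mathrm{soft}}(s') \right],
$$
with the corresponding maximum-entropy policy $\pi(a \mid s) \propto \exp\!\left(\tfrac{1}{\alpha}\, Q_{\mathrm{soft}}(s, a)\right)$.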
May 9, 2019