SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning
Kimin Lee, Michael Laskin, Aravind Srinivas, Pieter Abbeel
arXiv.org Artificial Intelligence
Model-free deep reinforcement learning (RL) has been successful in a range of challenging domains. However, some issues remain, such as stabilizing the optimization of nonlinear function approximators, preventing error propagation due to the Bellman backup in Q-learning, and exploring efficiently. To mitigate these issues, we present SUNRISE, a simple unified ensemble method that is compatible with various off-policy RL algorithms. SUNRISE integrates three key ingredients: (a) bootstrap with random initialization, which improves the stability of the learning process by training a diverse ensemble of agents; (b) weighted Bellman backups, which prevent error propagation in Q-learning by reweighting sample transitions based on uncertainty estimates from the ensemble; and (c) an inference method that selects actions with the highest upper-confidence bounds for efficient exploration. Our experiments show that SUNRISE significantly improves the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, on both continuous and discrete control tasks in both low-dimensional and high-dimensional environments. Our training code is available at https://github.com/pokaxpoka/sunrise.
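Beyond bootstrapping, the two remaining ingredients reduce to simple statistics over the ensemble's Q-values: the UCB rule picks the action maximizing the ensemble mean plus a scaled standard deviation, and the weighted backup damps the TD loss on transitions whose target Q-values the ensemble disagrees on. The NumPy sketch below is illustrative only; the function names, array shapes, default hyperparameters, and toy data are assumptions rather than the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of SUNRISE-style UCB action selection and
# uncertainty-based Bellman weights, assuming a discrete action
# space and a weighting of the form w = sigmoid(-std * T) + 0.5.
import numpy as np

def ucb_action(q_ensemble, lam=1.0):
    """Select the action with the highest upper-confidence bound.

    q_ensemble: (ensemble_size, num_actions) Q-estimates for one state.
    lam: exploration coefficient scaling the ensemble's disagreement.
    """
    mean = q_ensemble.mean(axis=0)   # Q_mean(s, a) across the ensemble
    std = q_ensemble.std(axis=0)     # Q_std(s, a), the uncertainty proxy
    return int(np.argmax(mean + lam * std))

def bellman_weights(target_q_std, temperature=10.0):
    """Per-transition weights for the Bellman backup.

    w = sigmoid(-std * T) + 0.5 lies in (0.5, 1.0), so transitions with
    uncertain target Q-values are down-weighted, never discarded.
    """
    return 1.0 / (1.0 + np.exp(target_q_std * temperature)) + 0.5

# Toy usage: an ensemble of 5 Q-heads over 3 discrete actions.
rng = np.random.default_rng(0)
q_ensemble = rng.normal(size=(5, 3))
action = ucb_action(q_ensemble, lam=1.0)
weights = bellman_weights(q_ensemble.std(axis=0))
print(action, weights)
```

Bounding the weights below by 0.5 means every sampled transition still contributes to the loss; high ensemble disagreement only halves a transition's influence rather than masking it out.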
Jul-21-2020