Disentangling Transfer in Continual Reinforcement Learning
Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś
arXiv.org Artificial Intelligence
Enabling continual learning systems to transfer knowledge from previously seen tasks in order to maximize performance on new tasks remains a significant challenge for the field, limiting the applicability of continual learning solutions to realistic scenarios. Consequently, this study aims to broaden our understanding of transfer and its driving forces in the specific case of continual reinforcement learning. We adopt Soft Actor-Critic (SAC) as the underlying RL algorithm and Continual World as a suite of continuous control tasks. We systematically study how different components of SAC (the actor and the critic, exploration, and data) affect transfer efficacy, and we provide recommendations regarding various modeling options. The best set of choices, dubbed ClonEx-SAC, is evaluated on the recent Continual World benchmark. ClonEx-SAC achieves an 87% final success rate, compared to 80% for PackNet, the best method in the benchmark. Moreover, forward transfer grows from 0.18 to 0.54 according to the metric provided by Continual World.
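The forward-transfer numbers quoted above (0.18 vs. 0.54) follow the Continual World convention of comparing the area under a task's success-rate curve during sequential training against the curve of a reference agent trained from scratch, normalized by the remaining headroom. A minimal sketch under that assumption (curve shapes and values below are purely illustrative):

```python
def _auc(curve):
    """Trapezoidal area under a success-rate curve sampled at evenly
    spaced points, with the training budget rescaled to [0, 1]."""
    n = len(curve) - 1
    return sum((curve[i] + curve[i + 1]) / 2 for i in range(n)) / n

def forward_transfer(success, success_ref):
    """Forward transfer for one task: how much of the reference agent's
    remaining headroom (1 - AUC_ref) the continual learner recovers."""
    auc, auc_ref = _auc(success), _auc(success_ref)
    return (auc - auc_ref) / (1.0 - auc_ref)

# Hypothetical curves: the continual learner starts higher thanks to
# knowledge transferred from earlier tasks.
seq = [0.5, 0.8, 0.9, 1.0]      # success rates while training in the sequence
scratch = [0.0, 0.3, 0.6, 0.9]  # success rates when training from scratch
print(round(forward_transfer(seq, scratch), 2))  # → 0.67
```

A value of 0 means no benefit over training from scratch, positive values indicate transfer, and negative values indicate interference from earlier tasks.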
Sep-28-2022