Memory-efficient Reinforcement Learning with Value-based Knowledge Consolidation
Qingfeng Lan, Yangchen Pan, Jun Luo, A. Rupam Mahmood
Artificial neural networks are promising for general function approximation but are challenging to train on data that are not independent and identically distributed, owing to catastrophic forgetting. The experience replay buffer, a standard component in deep reinforcement learning, reduces forgetting and improves sample efficiency by storing experiences in a large buffer and reusing them for later training. However, a large replay buffer imposes a heavy memory burden, especially on onboard and edge devices with limited memory capacity. We propose memory-efficient reinforcement learning algorithms, based on the deep Q-network (DQN) algorithm, that alleviate this problem. Our algorithms reduce forgetting and maintain high sample efficiency by consolidating knowledge from the target Q-network into the current Q-network. Compared with baseline methods, our algorithms achieve comparable or better performance on both feature-based and image-based tasks while easing the burden of a large experience replay buffer.
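The consolidation idea described in the abstract can be sketched as follows: alongside the usual TD loss computed on a (possibly very small) replay batch, the current Q-network is also regressed toward the target Q-network's values on auxiliary states, so value knowledge held by the target network is retained rather than overwritten. Below is a minimal PyTorch-style sketch, not the authors' released implementation; the network architecture, the source of auxiliary states, and names such as `consolidation_weight` and `aux_states` are illustrative assumptions.

```python
# Minimal sketch of value-based knowledge consolidation on top of a DQN loss.
# Assumptions (not from the paper's code): a simple MLP Q-network, auxiliary
# states sampled uniformly from the state space, and an MSE consolidation term.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def consolidated_dqn_loss(q_net, target_net, batch, aux_states,
                          gamma: float = 0.99, consolidation_weight: float = 1.0):
    obs, actions, rewards, next_obs, dones = batch

    # Standard DQN TD loss on the replay batch.
    q_pred = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_obs).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * q_next
    td_loss = F.mse_loss(q_pred, td_target)

    # Knowledge-consolidation loss: keep the current Q-network close to the
    # target Q-network on auxiliary states, so previously learned values are
    # preserved even when the replay buffer is small.
    with torch.no_grad():
        q_old = target_net(aux_states)
    consolidation_loss = F.mse_loss(q_net(aux_states), q_old)

    return td_loss + consolidation_weight * consolidation_loss


if __name__ == "__main__":
    obs_dim, n_actions, batch_size = 4, 2, 32
    q_net, target_net = QNet(obs_dim, n_actions), QNet(obs_dim, n_actions)
    target_net.load_state_dict(q_net.state_dict())

    batch = (
        torch.randn(batch_size, obs_dim),                # obs
        torch.randint(0, n_actions, (batch_size,)),      # actions
        torch.randn(batch_size),                         # rewards
        torch.randn(batch_size, obs_dim),                # next_obs
        torch.zeros(batch_size),                         # dones
    )
    # Illustrative: auxiliary states drawn uniformly from assumed state bounds.
    aux_states = torch.rand(batch_size, obs_dim) * 2 - 1
    loss = consolidated_dqn_loss(q_net, target_net, batch, aux_states)
    loss.backward()
```

In this sketch, `consolidation_weight` trades off plasticity (fitting new TD targets) against stability (staying close to the target network's values); how the auxiliary states are obtained is a key design choice in the paper's algorithms.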
arXiv.org Artificial Intelligence, April 10, 2023