An Advantage-based Optimization Method for Reinforcement Learning in Large Action Space
Hai Lin, Cheng Huang, Zhihong Chen
Reinforcement learning tasks in real-world scenarios often involve large, high-dimensional action spaces, leading to challenges such as convergence difficulties, instability, and high computational complexity. Traditional value-based reinforcement learning algorithms are widely acknowledged to struggle with these issues. A prevalent workaround is to generate independent sub-actions within each dimension of the action space; however, this introduces bias and hinders the learning of optimal policies. In this paper, we propose an advantage-based optimization method and an algorithm named Advantage Branching Dueling Q-network (ABQ). ABQ incorporates a baseline mechanism to tune the action value of each dimension, leveraging the advantage relationship across different sub-actions. With this approach, the learned policy can be optimized along each dimension. Empirical results demonstrate that ABQ outperforms the Branching Dueling Q-network (BDQ), achieving 3%, 171%, and 84% more cumulative reward in the HalfCheetah, Ant, and Humanoid environments, respectively. Furthermore, ABQ is competitive with two continuous-action benchmark algorithms, DDPG and TD3.
arXiv.org Artificial Intelligence
Dec-17-2024
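The mechanism the abstract describes (one discrete sub-action per action dimension, with each dimension's values tuned against a shared baseline via their advantages) builds on the action-branching architecture of BDQ. Below is a minimal PyTorch sketch of such a branching dueling head; the class and parameter names are illustrative, and the mean-centered aggregation shown here is BDQ's rule, not necessarily the exact advantage-based baseline ABQ proposes.

```python
import torch
import torch.nn as nn

class BranchingQNet(nn.Module):
    """Sketch of a BDQ-style branching dueling head: a shared trunk,
    one advantage stream per action dimension, and a shared state-value
    baseline. Names and sizes are hypothetical, not from the paper."""

    def __init__(self, obs_dim: int, num_branches: int, bins_per_branch: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # Shared state-value baseline V(s).
        self.value = nn.Linear(256, 1)
        # Independent advantage stream A_d(s, a_d) for each action dimension.
        self.advantages = nn.ModuleList(
            [nn.Linear(256, bins_per_branch) for _ in range(num_branches)]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        v = self.value(h)                                          # (B, 1)
        adv = torch.stack([a(h) for a in self.advantages], dim=1)  # (B, D, bins)
        # Per-branch dueling aggregation: Q_d = V + (A_d - mean_a A_d),
        # so the shared baseline tunes each dimension's action values.
        q = v.unsqueeze(1) + adv - adv.mean(dim=2, keepdim=True)
        return q  # (B, D, bins): one Q-vector per action dimension

# Greedy action selection picks the best sub-action in every branch:
#   q = net(obs)            # (B, D, bins)
#   actions = q.argmax(2)   # (B, D), one discrete sub-action per dimension
```

Selecting the argmax independently per branch is what keeps the method linear in the number of dimensions rather than exponential; the advantage relationship across sub-actions is what ABQ exploits to correct the bias this independence otherwise introduces.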