Revisiting Discrete Soft Actor-Critic
Haibin Zhou, Zichuan Lin, Junyou Li, Qiang Fu, Wei Yang, Deheng Ye
arXiv.org Artificial Intelligence
We study the adaptation of soft actor-critic (SAC) from continuous to discrete action spaces. We revisit vanilla SAC and provide an in-depth understanding of its Q-value underestimation and performance instability when applied to discrete settings. To address these issues, we propose an entropy penalty and double average Q-learning with Q-clip. Extensive experiments on standard discrete-action benchmarks, including Atari games and a large-scale MOBA game, demonstrate the efficacy of the proposed method.
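The abstract only names the two stabilizers, so the sketch below gives one plausible reading of the TD-target computation. It is a minimal sketch, not the authors' code: the averaging of the two target critics (in place of the usual minimum), the clip width `c`, and all function and variable names are assumptions made for illustration.

```python
# Illustrative sketch of a "double average" TD target with a Q-clip
# for discrete SAC. All names and the clip form are assumptions.
import torch

def td_target(q1_targ, q2_targ, pi, log_pi, reward, done,
              q_current, gamma=0.99, alpha=0.2, c=0.5):
    """q1_targ, q2_targ: [B, A] target-network Q-values for the next state;
    pi, log_pi: [B, A] next-state policy probabilities and log-probs;
    reward, done: [B]; q_current: [B] current Q(s, a) estimate."""
    # Average the two target critics rather than taking their minimum,
    # one way to counteract the Q-value underestimation the paper studies.
    q_avg = 0.5 * (q1_targ + q2_targ)
    # Soft state value: expected Q under the policy plus an entropy term.
    v_next = (pi * (q_avg - alpha * log_pi)).sum(dim=-1)
    target = reward + gamma * (1.0 - done) * v_next
    # Q-clip (assumed form): keep the target within a band around the
    # current estimate to damp instability from large TD errors.
    return torch.clamp(target, q_current - c, q_current + c)
```

Under this reading, replacing the pessimistic minimum with an average raises the bootstrap target, while the clip bounds how far any single update can move it.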
Jul-13-2023