Final Adaptation Reinforcement Learning for N-Player Games
Wolfgang Konen, Samineh Bagheri
arXiv.org Artificial Intelligence
This paper covers n-tuple-based reinforcement learning (RL) algorithms for games. We present new algorithms for TD-, SARSA- and Q-learning which work seamlessly on various games with an arbitrary number of players. This is achieved by taking a player-centered view in which each player propagates his/her rewards back to previous rounds. We add a new element called Final Adaptation RL (FARL) to all these algorithms. Our main contribution is to show that FARL is a vitally important ingredient for the success of the player-centered view in various games. We report results on seven board games with 1, 2 and 3 players, including Othello, ConnectFour and Hex. In most cases FARL is found to be important for learning a near-perfect playing strategy. All algorithms are available in the GBG framework on GitHub.
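The player-centered view and the final-adaptation step can be illustrated with a small sketch. The code below is not the paper's implementation: it is a minimal tabular TD(0) variant, assuming that each player's afterstates and final reward are collected per episode; the function name, the shared value table, and the single terminal update standing in for FARL are all illustrative assumptions.

```python
from collections import defaultdict

def player_centered_td(trajectories, final_rewards, alpha=0.5, gamma=1.0,
                       use_farl=True, V=None):
    """Sketch of player-centered TD(0) with a FARL-style terminal step.

    trajectories : dict mapping player -> list of that player's afterstates,
                   in the order the player visited them (hypothetical format).
    final_rewards: dict mapping player -> reward at game end.
    """
    if V is None:
        V = defaultdict(float)  # tabular value estimates, keyed (player, state)
    for p, states in trajectories.items():
        # Player-centered view: each player propagates value estimates
        # back along his/her own sequence of afterstates.
        for s, s_next in zip(states, states[1:]):
            V[(p, s)] += alpha * (gamma * V[(p, s_next)] - V[(p, s)])
        if use_farl and states:
            # FARL-like final adaptation (assumption): one extra update
            # pulling the player's last afterstate toward the actual
            # game outcome, without bootstrapping.
            s_last = states[-1]
            V[(p, s_last)] += alpha * (final_rewards[p] - V[(p, s_last)])
    return V
```

Over repeated episodes the terminal update injected by the final-adaptation step is propagated backward through the ordinary TD backups, which is the mechanism the abstract alludes to when it says each player propagates rewards back to previous rounds.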
Nov-29-2021
- Genre:
- Research Report (1.00)
- Industry:
- Leisure & Entertainment > Games (1.00)