$\widetilde{O}(T^{-1})$ Convergence to (Coarse) Correlated Equilibria in Full-Information General-Sum Markov Games
Weichao Mao, Haoran Qiu, Chen Wang, Hubertus Franke, Zbigniew Kalbarczyk, Tamer Başar
arXiv.org Artificial Intelligence
No-regret learning has long been closely connected to game theory. Recent works have devised uncoupled no-regret learning dynamics that, when adopted by all players in normal-form games, converge to various equilibrium solutions at a near-optimal rate of $\widetilde{O}(T^{-1})$, a significant improvement over the $O(T^{-1/2})$ rate of classic no-regret learners. However, analogous convergence results are scarce in Markov games, a more general setting that lays the foundation for multi-agent reinforcement learning. In this work, we close this gap by showing that the optimistic follow-the-regularized-leader (OFTRL) algorithm, together with appropriate value update procedures, finds $\widetilde{O}(T^{-1})$-approximate (coarse) correlated equilibria in full-information general-sum Markov games within $T$ iterations. Numerical results are also included to corroborate our theoretical findings.
Apr-23-2024
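As a rough illustration of the dynamics the abstract describes, the sketch below runs OFTRL with a negative-entropy regularizer (one common instantiation, equivalent to optimistic Hedge) for both players of a single-state zero-sum matrix game under full information. This is only the per-state no-regret component; the paper's Markov-game algorithm additionally couples it with value update procedures. The payoff matrix, step size `eta`, horizon `T`, and function names are illustrative assumptions, not values or APIs from the paper.

```python
import numpy as np

def oftrl_entropy_step(cum_utility, last_utility, eta):
    """One OFTRL step with a negative-entropy regularizer
    (optimistic Hedge): play proportional to
    exp(eta * (cumulative utilities + one extra copy of the
    latest utility)). The repeated term is the optimistic
    prediction that the next utility resembles the last one."""
    logits = eta * (cum_utility + last_utility)
    logits -= logits.max()                 # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Toy single-state, zero-sum game (matching pennies); eta and T
# are illustrative choices, not values from the paper.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoff matrix
eta, T = 0.5, 1000
cum_x, last_x = np.zeros(2), np.zeros(2)   # row player's utility history
cum_y, last_y = np.zeros(2), np.zeros(2)   # column player's utility history
empirical_joint = np.zeros((2, 2))         # time-averaged joint play

for t in range(T):
    x = oftrl_entropy_step(cum_x, last_x, eta)
    y = oftrl_entropy_step(cum_y, last_y, eta)
    last_x, last_y = A @ y, -(A.T @ x)     # full-information feedback
    cum_x = cum_x + last_x
    cum_y = cum_y + last_y
    empirical_joint += np.outer(x, y) / T  # its CCE gap is bounded by
                                           # the players' average regrets

print(empirical_joint)  # marginals approach (1/2, 1/2), the minimax play
```

The "optimism" (counting the most recent utility twice) is what lifts the rate beyond $O(T^{-1/2})$: when all players run such dynamics, each player's utility sequence changes slowly, the optimistic prediction stays accurate, and the individual regrets, whose average bounds the CCE gap of the empirical joint play, shrink at the faster $\widetilde{O}(1)$ rate over $T$ rounds.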