Polynomial-time Approximation Scheme for Equilibriums of Games
Sun, Hongbo, Xia, Chongkun, Yuan, Bo, Wang, Xueqian, Liang, Bin
–arXiv.org Artificial Intelligence
The Nash equilibrium[1] of normal-form games was proposed decades ago, yet even whether a PTAS (polynomial-time approximation scheme) exists for it remains open, let alone for equilibria of games with dynamics. A PTAS for game equilibria is important in its own right in game theory, and confirming its existence may impact multi-agent reinforcement learning research. First, the existence of a PTAS bears on whether the computational power required to reach equilibria of large-scale games is practical. Exactly computing a Nash equilibrium of a static game has been proved PPAD-hard[2]. Setting aside the possibility that PPAD itself admits polynomial-time algorithms[3], a PTAS describes methods that compute approximate Nash equilibria efficiently. Second, confirming the previously unknown existence of a PTAS for games would open the possibility of fundamentally solving two problems in multi-agent reinforcement learning at once: non-stationarity in training and the curse of dimensionality[4]. Both problems are tied to the absence of a PTAS for game equilibria: non-stationarity arises because existing polynomial-time methods lack convergence guarantees to equilibria, while the curse of dimensionality arises because methods with convergence guarantees lack polynomial-time complexity.
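To make the notion of approximation concrete: a strategy profile is an ε-Nash equilibrium when no player can gain more than ε by deviating unilaterally, and a PTAS must find such a profile in time polynomial in the game size for every fixed ε. The sketch below (not from the paper; the payoff matrices and candidate strategies are illustrative assumptions) computes the smallest ε for which a given mixed-strategy pair of a bimatrix game qualifies:

```python
import numpy as np

# Illustrative 2x2 zero-sum game (matching-pennies payoffs); these
# matrices and strategies are assumptions for the example, not taken
# from the paper under discussion.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoff matrix
B = -A                                     # column player's payoffs (zero-sum)

def eps_nash_gap(A, B, x, y):
    """Largest unilateral gain available to either player, i.e. the
    smallest eps for which (x, y) is an eps-Nash equilibrium."""
    row_gain = np.max(A @ y) - x @ A @ y   # best pure deviation for the row player
    col_gain = np.max(x @ B) - x @ B @ y   # best pure deviation for the column player
    return max(row_gain, col_gain)

x = np.array([0.5, 0.5])  # uniform mixing: the exact equilibrium of this game
y = np.array([0.5, 0.5])
print(eps_nash_gap(A, B, x, y))  # 0.0 at the exact equilibrium
```

Here the uniform profile yields a gap of 0 (an exact equilibrium), while any other profile yields a strictly positive gap; a PTAS would guarantee a profile with gap at most ε in polynomial time for each fixed ε.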
Jan-1-2024