Learning in Discounted-cost and Average-cost Mean-field Games
Berkay Anahtarcı, Can Deha Karıksız, Naci Saldi
arXiv.org Artificial Intelligence
We consider learning approximate Nash equilibria for discrete-time mean-field games with nonlinear stochastic state dynamics under both discounted-cost and average-cost criteria. To this end, we introduce a mean-field equilibrium (MFE) operator whose fixed point is a mean-field equilibrium, i.e., an equilibrium in the infinite-population limit. We first prove that this operator is a contraction, and we propose a learning algorithm that computes an approximate mean-field equilibrium by replacing the MFE operator with a random approximation of it. Using the contraction property of the MFE operator, we then carry out an error analysis of the proposed learning algorithm. Finally, we show that the learned mean-field equilibrium constitutes an approximate Nash equilibrium for games with finitely many agents.
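The computational core described above is a fixed-point iteration on a contraction operator: by the Banach fixed-point theorem, repeatedly applying the MFE operator converges geometrically to the unique equilibrium. The following is a minimal illustrative sketch, not the paper's algorithm; the operator `mfe_operator` is a hypothetical stand-in (a 0.5-contraction on R^2), whereas the paper's operator acts on population distributions and is approximated from samples.

```python
import numpy as np

def mfe_operator(mu):
    """Hypothetical stand-in for the MFE operator: a contraction on R^2
    with modulus 0.5 (the true operator maps state distributions)."""
    return 0.5 * mu + np.array([1.0, -1.0])

def fixed_point_iteration(op, mu0, tol=1e-10, max_iter=1000):
    """Iterate mu_{k+1} = op(mu_k); for a contraction, the iterates
    converge geometrically to the unique fixed point."""
    mu = mu0
    for _ in range(max_iter):
        mu_next = op(mu)
        if np.linalg.norm(mu_next - mu) < tol:
            return mu_next
        mu = mu_next
    return mu

# The fixed point solves mu = 0.5*mu + (1, -1), i.e. mu = (2, -2).
mu_star = fixed_point_iteration(mfe_operator, np.zeros(2))
```

In the learning setting, `mfe_operator` is only available through a noisy, sample-based approximation, which is why the paper's error analysis bounds how that approximation error propagates through the iteration.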
Nov-10-2022