When is Mean-Field Reinforcement Learning Tractable and Relevant?
Yardim, Batuhan, Goldman, Artur, He, Niao
–arXiv.org Artificial Intelligence
Mean-field reinforcement learning has become a popular theoretical framework for efficiently approximating large-scale multi-agent reinforcement learning (MARL) problems that exhibit symmetry. However, questions remain regarding the applicability of mean-field approximations: in particular, how accurately they approximate real-world systems, and the conditions under which they become computationally tractable. We establish explicit finite-agent bounds on how well the MFG solution approximates the true $N$-player game for two popular mean-field solution concepts. Furthermore, for the first time, we establish explicit lower bounds indicating that MFGs are poor or uninformative approximations of $N$-player games when only Lipschitz dynamics and rewards are assumed. Finally, we analyze the computational complexity of solving MFGs with only Lipschitz properties and prove that they are \textsc{PPAD}-complete, a complexity class conjectured to be intractable, similar to general-sum $N$-player games. Our theoretical results underscore the limitations of MFGs and complement and justify existing work by proving hardness in the absence of common theoretical assumptions.
Feb-8-2024