
The Best of Both Worlds in Network Population Games: Reaching Consensus & Convergence to Equilibrium

Neural Information Processing Systems

Reaching consensus and convergence to equilibrium are two major challenges of multi-agent systems. Although each has attracted significant attention, relatively few studies address both challenges at the same time. This paper examines the connection between the notions of consensus and equilibrium in a multi-agent system where multiple interacting sub-populations coexist. We argue that consensus can be seen as an intricate component of intra-population stability, whereas equilibrium can be seen as encoding inter-population stability. We show that smooth fictitious play, a well-known learning model in game theory, can achieve both consensus and convergence to equilibrium in diverse multi-agent settings. Moreover, we show that the consensus formation process plays a crucial role in the thorny problem of equilibrium selection in multi-agent learning.
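Smooth fictitious play, the learning rule highlighted in the abstract, replaces the exact best response of classical fictitious play with a logit (softmax) response to the opponent's empirical action frequencies. A minimal sketch for a two-action coordination game follows; the payoff matrix, temperature, and step count are illustrative assumptions, not the paper's exact setting.

```python
import math
import random

# Illustrative 2x2 coordination game: both players earn 1 for matching, 0 otherwise.
PAYOFF = [[1.0, 0.0], [0.0, 1.0]]

def logit_response(opp_freq, temperature=0.1):
    """Smoothed best response: softmax over expected payoffs against opp_freq."""
    expected = [sum(PAYOFF[a][b] * opp_freq[b] for b in range(2)) for a in range(2)]
    exps = [math.exp(e / temperature) for e in expected]
    z = sum(exps)
    return [e / z for e in exps]

def smooth_fictitious_play(steps=5000, seed=0):
    rng = random.Random(seed)
    # counts[p] = smoothed counts of the OPPONENT's past actions, as seen by player p.
    counts = [[1, 1], [1, 1]]
    for _ in range(steps):
        actions = []
        for p in (0, 1):
            total = sum(counts[p])
            freq = [c / total for c in counts[p]]
            probs = logit_response(freq)
            actions.append(0 if rng.random() < probs[0] else 1)
        counts[0][actions[1]] += 1  # player 0 records player 1's action
        counts[1][actions[0]] += 1  # player 1 records player 0's action
    # return each player's empirical belief about the opponent
    return [[c / sum(cs) for c in cs] for cs in counts]
```

Under these assumptions, both players' empirical frequencies concentrate on the same action, illustrating the paper's point that the learning dynamic reaches consensus and settles on one of the game's pure equilibria.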



A Proof. A.1 Proof of Theorem 1. We leverage the results in [49].

Neural Information Processing Systems

Lemma 3. Consider the ReLU activation. The proof of Theorem 1 is given below. Inequality (3) uses the strict monotonicity of p(·). Code is available at this link. The neural networks are updated using Adam with a learning rate initialized at 0.035. None of them have communication constraints. The training time is shown in Table 1.
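The Adam update with learning rate 0.035 mentioned above can be sketched in plain Python using the standard textbook form of the optimizer; the scalar objective and step count here are illustrative assumptions, not the paper's training setup.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.035, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (standard bias-corrected form)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize the illustrative objective f(theta) = (theta - 3)^2 from theta = 0.
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    grad = 2 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t)
```

With this learning rate the iterate moves roughly 0.035 per step toward the minimum and then oscillates within a band of about the learning rate around it.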





A Missing Statements and Proofs. A.1 Statements for Section 3.1

Neural Information Processing Systems

Consider a two-player Markov game where both players affect the transition. As we have seen in Section 2.1, in the case of unilateral deviation from a joint policy, consider a (possibly correlated) joint policy σ̂. By Lemma A.1, we know that the equality holds due to the zero-sum property (1). An approximate NE is an approximate global minimum, and an approximate global minimum is an approximate NE.
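The equivalence between approximate Nash equilibria and approximate global minima can be made concrete via exploitability: the total gain available to the players from unilateral deviation, which is zero exactly at a NE of a zero-sum game. A minimal sketch for a normal-form zero-sum game follows (an illustration in the matrix-game case, not the paper's Markov-game construction).

```python
def exploitability(A, x, y):
    """Total unilateral-deviation gain in the zero-sum game with payoff matrix A,
    where the row player maximizes x^T A y and the column player minimizes it."""
    n, m = len(A), len(A[0])
    value = sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(m))
    best_row = max(sum(A[i][j] * y[j] for j in range(m)) for i in range(n))
    best_col = min(sum(x[i] * A[i][j] for i in range(n)) for j in range(m))
    return (best_row - value) + (value - best_col)

# Rock-paper-scissors: uniform play is the exact NE, so exploitability is 0;
# any other strategy profile is strictly exploitable.
RPS = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
uniform = [1 / 3, 1 / 3, 1 / 3]
```

Here an ε-NE is exactly a profile whose exploitability, the quantity being globally minimized, is at most 2ε, which mirrors the two-directional statement above.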