Appendix for "Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games"

Neural Information Processing Systems

A.1 Proof of Theorem 1

To prove Theorem 1, we rely on the following lemma.

Lemma 1. See Proposition 7.1 in [3].

Now we can prove Theorem 1. Proof. Therefore, the distribution over state-action pairs is equivalent to the distribution over actions.

A.3 Proof of Theorem 3

We first restate the proposition: PE is equivalent to exploitability.
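The restated proposition equates PE with exploitability. As a concrete point of reference (a minimal sketch of the standard exploitability/NashConv quantity in a zero-sum matrix game, not the paper's PE construction), exploitability is the sum of both players' best-response regrets and vanishes exactly at a Nash equilibrium:

```python
import numpy as np

def exploitability(A, x, y):
    """Exploitability (NashConv) of the profile (x, y) in the zero-sum
    matrix game where the row player maximizes x^T A y: the sum of both
    players' best-response regrets. Zero exactly at a Nash equilibrium."""
    row_br = np.max(A @ y)   # row player's best-response value against y
    col_br = np.min(x @ A)   # column player's best-response value against x
    return row_br - col_br   # the two regrets telescope in zero-sum games

# Matching pennies: uniform play is the unique Nash equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
uniform = np.array([0.5, 0.5])
print(exploitability(A, uniform, uniform))                # 0.0
print(exploitability(A, np.array([1.0, 0.0]), uniform))   # 1.0
```

Playing a pure strategy against a uniform opponent incurs regret 1 here, while the uniform profile is unexploitable.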



Exploitability Minimization in Games and Beyond

Neural Information Processing Systems

Pseudo-games are a natural and well-known generalization of normal-form games, in which the actions taken by each player affect not only the other players' payoffs, as in games, but also the other players' strategy sets. The solution concept par excellence for pseudo-games is the generalized Nash equilibrium (GNE), i.e., a strategy profile at which each player's strategy is feasible and no player can improve their payoffs by unilaterally deviating to another strategy in the strategy set determined by the other players' strategies. The computation of GNE in pseudo-games has long been a problem of interest, due to applications in a wide variety of fields, from environmental protection to logistics to telecommunications. Although computing GNE is PPAD-hard in general, it is still of interest to try to compute them in restricted classes of pseudo-games. One approach is to search for a strategy profile that minimizes exploitability, i.e., the sum of the regrets across all players. As exploitability is nondifferentiable in general, developing efficient first-order methods that minimize it might not seem possible at first glance. We observe, however, that the exploitability-minimization problem can be recast as a min-max optimization problem, and thereby obtain polynomial-time first-order methods to compute a refinement of GNE, namely the variational equilibria (VE), in convex-concave cumulative regret pseudo-games with jointly convex constraints. More generally, we also show that our methods find the stationary points of the exploitability in polynomial time in Lipschitz-smooth pseudo-games with jointly convex constraints. Finally, we demonstrate in experiments that our methods not only outperform known algorithms, but that even in pseudo-games where they are not guaranteed to converge to a GNE, they may do so nonetheless, with proper initialization.
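The min-max recasting can be illustrated on the simplest case. The sketch below (my own function names; plain simultaneous entropic mirror descent-ascent with iterate averaging on an unconstrained zero-sum matrix game, not the paper's method for pseudo-games with jointly convex constraints) treats min over (x, y) of exploitability as the bilinear saddle problem min_(x,y) max_(x',y') [x'^T A y - x^T A y']:

```python
import numpy as np

def exploitability(A, x, y):
    """Sum of best-response regrets; zero exactly at a Nash equilibrium."""
    return np.max(A @ y) - np.min(x @ A)

def minimize_exploitability(A, steps=20000, eta=0.01):
    """Recast min_(x,y) exploitability as the convex-concave saddle problem
        min_(x,y) max_(x',y')  x'^T A y - x^T A y',
    and run simultaneous entropic mirror descent-ascent. The averaged
    outer iterates converge for this bilinear objective."""
    n, m = A.shape

    def mw(p, g):                        # multiplicative-weights step on simplex
        q = p * np.exp(g)
        return q / q.sum()

    x, y = np.full(n, 1 / n), np.full(m, 1 / m)
    xd, yd = x.copy(), y.copy()          # deviation ("adversary") strategies
    x_sum, y_sum = np.zeros(n), np.zeros(m)
    for _ in range(steps):
        x_sum += x; y_sum += y
        xd_new = mw(xd,  eta * (A @ y))    # ascend in x':  grad =  A y
        yd_new = mw(yd, -eta * (A.T @ x))  # ascend in y':  grad = -A^T x
        x = mw(x,  eta * (A @ yd))         # descend in x:  grad = -A y'
        y = mw(y, -eta * (A.T @ xd))       # descend in y:  grad =  A^T x'
        xd, yd = xd_new, yd_new
    return x_sum / steps, y_sum / steps

# Biased matching pennies: the averaged iterates approach the mixed
# equilibrium, driving exploitability toward zero.
A = np.array([[2.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = minimize_exploitability(A)
```

The objective separates into two coupled zero-sum subproblems (x against y', x' against y), so averaging the simultaneous mirror-descent iterates yields the usual ergodic O(1/sqrt(T)) duality-gap guarantee; the nondifferentiability of exploitability never has to be confronted directly.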





Supplementary Materials for "XDO: A Double Oracle Algorithm for Extensive-Form Games"

1 Proofs

Proposition 1.

Neural Information Processing Systems

In a given iteration, consider the restricted game for a single GMP game. If player 2 is not allowed an action that is unavailable to player 1, player 2's best response will be a new action. In (k, m)-clone GMP with n classes, XDO adds at most 2n actions for each player in total.

Proposition 6. As in that work, we represent actions that are in the restricted game by bold arrows. Extensive-form pure strategies specify an action at every infostate.
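The proofs above reason about a restricted game that grows by adding best responses. As an illustrative sketch of the plain normal-form double oracle loop that XDO generalizes to extensive form (this is not XDO itself; `row_nash` is a hypothetical helper built on `scipy.optimize.linprog`):

```python
import numpy as np
from scipy.optimize import linprog

def row_nash(A):
    """Row player's maximin strategy in the zero-sum game A (row player
    maximizes x^T A y), via the standard LP: max v s.t. (A^T x)_j >= v."""
    n, m = A.shape
    c = np.zeros(n + 1); c[-1] = -1.0              # linprog minimizes, so min -v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])      # v - (A^T x)_j <= 0
    A_eq = np.hstack([np.ones((1, n)), [[0.0]]])   # sum(x) = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[:n]

def double_oracle(A):
    """Normal-form double oracle: solve the restricted game, add each
    player's best response to the restricted equilibrium, and stop when
    neither best response enlarges the restricted action sets."""
    rows, cols = [0], [0]                          # restricted action sets
    while True:
        sub = A[np.ix_(rows, cols)]
        x = np.zeros(A.shape[0]); x[rows] = row_nash(sub)
        y = np.zeros(A.shape[1]); y[cols] = row_nash(-sub.T)  # column player
        br_row = int(np.argmax(A @ y))             # best response to y
        br_col = int(np.argmin(x @ A))             # best response to x
        grew = False
        if br_row not in rows: rows.append(br_row); grew = True
        if br_col not in cols: cols.append(br_col); grew = True
        if not grew:
            return x, y

# Rock-paper-scissors: the restricted game grows to the full game and the
# loop returns the uniform equilibrium.
A = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])
x, y = double_oracle(A)
```

Each iteration adds at most one new action per player, mirroring the counting argument above that bounds how many actions XDO can add in total.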