Tewolde, Emanuel
Learning and Computation of $\Phi$-Equilibria at the Frontier of Tractability
Zhang, Brian Hu, Anagnostides, Ioannis, Tewolde, Emanuel, Berker, Ratip Emin, Farina, Gabriele, Conitzer, Vincent, Sandholm, Tuomas
$\Phi$-equilibria -- and the associated notion of $\Phi$-regret -- are a powerful and flexible framework at the heart of online learning and game theory, whereby enriching the set of deviations $\Phi$ begets stronger notions of rationality. Recently, Daskalakis, Farina, Fishelson, Pipis, and Schneider (STOC '24) -- abbreviated as DFFPS -- settled the existence of efficient algorithms when $\Phi$ contains only linear maps under a general, $d$-dimensional convex constraint set $\mathcal{X}$. In this paper, we significantly extend their work by resolving the case where $\Phi$ is $k$-dimensional; degree-$\ell$ polynomials constitute a canonical such example with $k = d^{O(\ell)}$. In particular, positing only oracle access to $\mathcal{X}$, we obtain two main positive results: i) a $\text{poly}(n, d, k, \text{log}(1/\epsilon))$-time algorithm for computing $\epsilon$-approximate $\Phi$-equilibria in $n$-player multilinear games, and ii) an efficient online algorithm that incurs average $\Phi$-regret at most $\epsilon$ using $\text{poly}(d, k)/\epsilon^2$ rounds. We also show nearly matching lower bounds in the online learning setting, thereby obtaining for the first time a family of deviations that captures the learnability of $\Phi$-regret. From a technical standpoint, we extend the framework of DFFPS from linear maps to the more challenging case of maps with polynomial dimension. At the heart of our approach is a polynomial-time algorithm for computing an expected fixed point of any $\phi : \mathcal{X} \to \mathcal{X}$ based on the ellipsoid against hope (EAH) algorithm of Papadimitriou and Roughgarden (JACM '08). In particular, our algorithm for computing $\Phi$-equilibria is based on executing EAH in a nested fashion -- each step of EAH itself being implemented by invoking a separate call to EAH.
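For orientation, one standard way to write down the two objects in this abstract; the precise setup in the paper may differ, and the linear-utility online formulation below is an assumption made only for illustration:
\[
  \mathrm{Reg}^{\Phi}_T \;:=\; \max_{\phi \in \Phi} \, \sum_{t=1}^{T} \Big( \langle u_t,\, \phi(x_t) \rangle - \langle u_t,\, x_t \rangle \Big), \qquad x_t \in \mathcal{X},\;\; \phi : \mathcal{X} \to \mathcal{X},
\]
so average $\Phi$-regret at most $\epsilon$ means $\mathrm{Reg}^{\Phi}_T / T \le \epsilon$; and a distribution $\mu$ over joint strategy profiles is an $\epsilon$-approximate $\Phi$-equilibrium if, for every player $i$ and every deviation $\phi \in \Phi_i$,
\[
  \mathbb{E}_{x \sim \mu}\big[\, u_i\big(\phi(x_i),\, x_{-i}\big) - u_i(x) \,\big] \;\le\; \epsilon .
\]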
Expected Variational Inequalities
Zhang, Brian Hu, Anagnostides, Ioannis, Tewolde, Emanuel, Berker, Ratip Emin, Farina, Gabriele, Conitzer, Vincent, Sandholm, Tuomas
Variational inequalities (VIs) encompass many fundamental problems in diverse areas ranging from engineering to economics and machine learning. However, their considerable expressivity comes at the cost of computational intractability. In this paper, we introduce and analyze a natural relaxation -- which we refer to as expected variational inequalities (EVIs) -- where the goal is to find a distribution that satisfies the VI constraint in expectation. By adapting recent techniques from game theory, we show that, unlike VIs, EVIs can be solved in polynomial time under general (nonmonotone) operators. EVIs capture the seminal notion of correlated equilibria, but enjoy a greater reach beyond games. We also employ our framework to capture and generalize several existing disparate results, including from settings such as smooth games, and games with coupled constraints or nonconcave utilities.
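For concreteness, the two problems written side by side under one common sign convention (an assumption for illustration, not necessarily the paper's exact definition): a VI asks for a single point that satisfies the inequality against every comparator, whereas an EVI only asks for a distribution that satisfies it in expectation, so any point mass on a VI solution is in particular an EVI solution.
\[
  \text{(VI)}\qquad \text{find } x^{\ast} \in \mathcal{X} \ \text{ such that } \ \langle F(x^{\ast}),\, x - x^{\ast} \rangle \;\ge\; 0 \quad \forall x \in \mathcal{X},
\]
\[
  \text{(EVI)}\qquad \text{find } \mu \in \Delta(\mathcal{X}) \ \text{ such that } \ \mathbb{E}_{x \sim \mu}\big[ \langle F(x),\, x' - x \rangle \big] \;\ge\; -\epsilon \quad \forall x' \in \mathcal{X}.
\]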
Computing Game Symmetries and Equilibria That Respect Them
Tewolde, Emanuel, Zhang, Brian Hu, Oesterheld, Caspar, Sandholm, Tuomas, Conitzer, Vincent
Strategic interactions can be represented more concisely, and analyzed and solved more efficiently, if we are aware of the symmetries within the multiagent system. Symmetries also have conceptual implications, for example for equilibrium selection. We study the computational complexity of identifying and using symmetries. Using the classical framework of normal-form games, we consider game symmetries that can be across some or all players and/or actions. We find a strong connection between game symmetries and graph automorphisms, yielding graph automorphism and graph isomorphism completeness results for characterizing the symmetries present in a game. On the other hand, we also show that the problem becomes polynomial-time solvable when we restrict the consideration of actions in one of two ways. Next, we investigate when exactly game symmetries can be successfully leveraged for Nash equilibrium computation. We show that finding a Nash equilibrium that respects a given set of symmetries is PPAD- and CLS-complete in general-sum and team games respectively -- that is, exactly as hard as Brouwer fixed point and gradient descent problems. Finally, we present polynomial-time methods for the special cases where we are aware of a vast number of symmetries, or where the game is two-player zero-sum and we do not even know the symmetries.
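As a concrete illustration of "symmetries across some or all players and/or actions", the sketch below brute-force checks whether a candidate pair of player and action permutations is a symmetry of a small normal-form game, i.e., whether permuting players and relabeling actions leaves every payoff unchanged. The function name, the payoff-tensor encoding, and the matching-pennies example are illustrative assumptions, not the paper's representation or algorithms.

import itertools
import numpy as np

def is_game_symmetry(payoffs, player_perm, action_perms):
    """Return True iff (player_perm, action_perms) is a symmetry of the
    normal-form game given by `payoffs`: permuting players and relabeling
    actions leaves every payoff unchanged.

    payoffs      -- list of n numpy arrays, each of shape (|A_1|, ..., |A_n|);
                    payoffs[i][a] is player i's utility at action profile a.
    player_perm  -- list of length n; player i is mapped to player_perm[i].
    action_perms -- action_perms[i][a_i] is the action of player player_perm[i]
                    that action a_i of player i is mapped to.
    """
    n = len(payoffs)
    num_actions = payoffs[0].shape
    for profile in itertools.product(*(range(m) for m in num_actions)):
        # Image of the action profile under the candidate symmetry.
        image = [None] * n
        for i in range(n):
            image[player_perm[i]] = action_perms[i][profile[i]]
        image = tuple(image)
        # Utilities must be preserved: u_{pi(i)}(sigma(a)) == u_i(a).
        for i in range(n):
            if payoffs[player_perm[i]][image] != payoffs[i][profile]:
                return False
    return True

# Matching pennies: keeping the players fixed and swapping both players'
# actions simultaneously is a symmetry; swapping the two players is not.
mp = [np.array([[1, -1], [-1, 1]]), np.array([[-1, 1], [1, -1]])]
assert is_game_symmetry(mp, [0, 1], [[1, 0], [1, 0]])
assert not is_game_symmetry(mp, [1, 0], [[0, 1], [0, 1]])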
Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback
Conitzer, Vincent, Freedman, Rachel, Heitzig, Jobst, Holliday, Wesley H., Jacobs, Bob M., Lambert, Nathan, Mossé, Milan, Pacuit, Eric, Russell, Stuart, Schoelkopf, Hailey, Tewolde, Emanuel, Zwicker, William S.
Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about "collective" preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023.
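Purely as an illustration of the kind of tool the abstract points to, the sketch below aggregates diverging rankings of candidate model outputs with the Borda rule, one classical voting rule from social choice; the function and the toy rankings are made up here and are not a method proposed in the paper.

from collections import defaultdict

def borda_scores(rankings):
    """Aggregate rankings (each a list of alternatives, best first) with the
    Borda rule: in a ranking of m alternatives, the item in position p
    (0-indexed) receives m - 1 - p points; the totals define the aggregate
    collective ordering."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += m - 1 - position
    return dict(scores)

# Four annotators rank three candidate model responses; Borda aggregates the
# diverging rankings into a single collective ordering (here A > B > C).
print(borda_scores([["A", "B", "C"], ["A", "C", "B"],
                    ["B", "A", "C"], ["C", "B", "A"]]))
# {'A': 5, 'B': 4, 'C': 3}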
The Computational Complexity of Single-Player Imperfect-Recall Games
Tewolde, Emanuel, Oesterheld, Caspar, Conitzer, Vincent, Goldberg, Paul W.
We study single-player extensive-form games with imperfect recall, such as the Sleeping Beauty problem or the Absentminded Driver game. For such games, two natural equilibrium concepts have been proposed as alternative solution concepts to ex-ante optimality. One equilibrium concept uses generalized double halving (GDH) as a belief system and evidential decision theory (EDT), and another one instead uses generalized thirding (GT) as a belief system and causal decision theory (CDT).
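To make the ex-ante benchmark concrete, here is the textbook calculation for the Absentminded Driver game in its classic formulation (payoff 0 for exiting at the first intersection, 4 for exiting at the second, 1 for driving past both; these payoffs are the standard ones from the literature rather than numbers taken from this paper). A plan is a single probability $p$ of continuing at any intersection, since the driver cannot distinguish the two:
\[
  \max_{p \in [0,1]} \;\Big[ (1-p)\cdot 0 \;+\; p(1-p)\cdot 4 \;+\; p^{2}\cdot 1 \Big] \;=\; \max_{p \in [0,1]} \big( 4p - 3p^{2} \big),
\]
which is maximized at $p^{\ast} = 2/3$ with ex-ante value $4/3$. The equilibrium concepts in the abstract instead evaluate such a plan from the perspective of the player at an information set, under the stated belief systems (GDH or GT) and decision theories (EDT or CDT).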