Game Theory: Overviews
Revisiting Game-Theoretic Control in Socio-Technical Networks: Emerging Design Frameworks and Contemporary Applications
Socio-technical networks represent emerging cyber-physical infrastructures that are tightly interwoven with human networks. The coupling between human and technical networks presents significant challenges in managing, controlling, and securing these complex, interdependent systems. This paper investigates game-theoretic frameworks for the design and control of socio-technical networks, with a focus on critical applications such as misinformation management, infrastructure optimization, and resilience in socio-cyber-physical systems (SCPS). Core methodologies, including Stackelberg games, mechanism design, and dynamic game theory, are examined as powerful tools for modeling interactions in hierarchical, multi-agent environments. Key challenges addressed include mitigating human-driven vulnerabilities, managing large-scale system dynamics, and countering adversarial threats. By bridging individual agent behaviors with overarching system goals, this work illustrates how the integration of game theory and control theory can lead to robust, resilient, and adaptive socio-technical networks. This paper highlights the potential of these frameworks to dynamically align decentralized agent actions with system-wide objectives of stability, security, and efficiency.
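Stackelberg games, the first of the core methodologies listed above, have a standard bilevel form; the notation below is generic rather than taken from the paper: a leader commits to a decision $x$, and a follower best-responds with $y^{*}(x)$.

```latex
% Generic leader-follower (Stackelberg) formulation; illustrative notation.
\max_{x \in X} \; f\bigl(x,\, y^{*}(x)\bigr)
\qquad \text{s.t.} \qquad
y^{*}(x) \in \arg\max_{y \in Y}\, g(x, y)
```

In the socio-technical setting sketched above, the leader would be the system designer or network operator and the followers the human agents, though the paper's precise instantiation may differ.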
A Finite-Sample Analysis of Payoff-Based Independent Learning in Zero-Sum Stochastic Games
In this work, we study two-player zero-sum stochastic games and develop a variant of the smoothed best-response learning dynamics that combines independent learning dynamics for matrix games with the minimax value iteration for stochastic games. The resulting learning dynamics are payoff-based, convergent, rational, and symmetric between the two players.
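As a rough illustration of the matrix-game building block, here is a minimal payoff-based, independent smoothed best-response learner in Python; each player observes only its own realized payoff and softmax-responds to running payoff estimates. This is a sketch under those assumptions, not the authors' exact dynamics (which also interleave minimax value iteration for the stochastic-game case).

```python
import numpy as np

def smoothed_br_learning(A, steps=50_000, tau=0.1, lr=0.05, seed=0):
    """Payoff-based independent learning in a zero-sum matrix game A.

    Illustrative sketch: each player softmax-responds ("smoothed best
    response") to its own running payoff estimates, updated only from
    realized payoffs -- no knowledge of the opponent's strategy.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    q1, q2 = np.zeros(m), np.zeros(n)          # per-action payoff estimates
    for _ in range(steps):
        p1 = np.exp(q1 / tau); p1 /= p1.sum()  # smoothed best response
        p2 = np.exp(q2 / tau); p2 /= p2.sum()
        i, j = rng.choice(m, p=p1), rng.choice(n, p=p2)
        r = A[i, j]                            # row player's realized payoff
        q1[i] += lr * (r - q1[i])              # payoff-based updates
        q2[j] += lr * (-r - q2[j])             # zero-sum: column receives -r
    return p1, p2

# Matching pennies: both players should approach near-uniform play.
print(smoothed_br_learning(np.array([[1., -1.], [-1., 1.]])))
```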
Stackelberg Games with $k$-Submodular Function under Distributional Risk-Receptiveness and Robustness
Park, Seonghun, Bansal, Manish
We study submodular optimization in an adversarial context, applicable to machine learning problems such as feature selection using data susceptible to uncertainties and attacks. We focus on Stackelberg games between an attacker (or interdictor) and a defender, where the attacker aims to minimize the defender's objective of maximizing a $k$-submodular function. We allow uncertainties arising from the success of attacks and inherent data noise, and address challenges due to incomplete knowledge of the probability distribution of random parameters. Specifically, we introduce the Distributionally Risk-Averse $k$-Submodular Interdiction Problem (DRA $k$-SIP) and the Distributionally Risk-Receptive $k$-Submodular Interdiction Problem (DRR $k$-SIP), along with finitely convergent exact algorithms for solving them. The DRA $k$-SIP solution allows a risk-averse interdictor to develop strategies that are robust to real-world uncertainties. Conversely, the DRR $k$-SIP solution suggests aggressive tactics for an attacker willing to embrace (distributional) risk to inflict maximum damage, identifying critical vulnerable components that the defender can in turn use to prioritize its defenses. The optimal values derived from DRA $k$-SIP and DRR $k$-SIP offer a confidence-interval-like range for the expected value of the defender's objective function, capturing distributional ambiguity. We conduct computational experiments on instances of the feature selection and sensor placement problems, using Wisconsin breast cancer data and synthetic data, respectively.
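Schematically, and in generic notation rather than the paper's own, the two problems bracket the defender's expected value from opposite sides of an ambiguity set $\mathcal{P}$ of distributions for the random parameter $\xi$:

```latex
% Illustrative formulations: interdiction x, defender response y,
% k-submodular objective f, ambiguity set \mathcal{P}.
z_{\mathrm{DRA}} = \min_{x}\, \max_{P \in \mathcal{P}}\,
  \mathbb{E}_{\xi \sim P}\Bigl[\max_{y} f(y; x, \xi)\Bigr],
\qquad
z_{\mathrm{DRR}} = \min_{x}\, \min_{P \in \mathcal{P}}\,
  \mathbb{E}_{\xi \sim P}\Bigl[\max_{y} f(y; x, \xi)\Bigr]
```

Since $z_{\mathrm{DRR}} \le z_{\mathrm{DRA}}$ by construction, the pair yields the confidence-interval-like range mentioned above.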
Policy Space Response Oracles: A Survey
Bighashdel, Ariyan, Wang, Yongzhao, McAleer, Stephen, Savani, Rahul, Oliehoek, Frans A.
Game theory provides a mathematical way to study the interaction between multiple decision makers. However, classical game-theoretic analysis scales poorly with the number of strategies, precluding direct application to more complex scenarios. This survey provides a comprehensive overview of a framework for large games, known as Policy Space Response Oracles (PSRO), which holds promise for improving scalability by focusing attention on sufficient subsets of strategies. We first motivate PSRO and provide historical context. We then focus on the strategy exploration problem for PSRO: the challenge of assembling, at minimal computational cost, effective subsets of strategies that still represent the original game well. We survey current research directions for enhancing the efficiency of PSRO, and explore the applications of PSRO across various domains. We conclude by discussing open questions and future research.
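The core loop the survey is organized around can be sketched in a few lines; `meta_solver` (e.g., a Nash or other meta-strategy solver on the empirical payoff tensor) and `oracle` (e.g., an RL-trained approximate best response) are placeholders for the components the survey compares, not a specific library's API.

```python
def psro(game, initial_policies, meta_solver, oracle, iterations=10):
    """Schematic Policy Space Response Oracles (PSRO) loop.

    game        -- exposes empirical_payoffs(populations) for the
                   restricted game over the current policy populations
    meta_solver -- maps empirical payoffs to a meta-strategy
                   (a mixture over each player's population)
    oracle      -- returns an approximate best response for one player
                   against the others' meta-strategies
    """
    populations = [[p] for p in initial_policies]  # one policy per player
    meta = None
    for _ in range(iterations):
        # 1. Build the restricted ("empirical") game over current policies.
        payoffs = game.empirical_payoffs(populations)
        # 2. Solve the restricted game for a meta-strategy profile.
        meta = meta_solver(payoffs)
        # 3. Expand each population with a best response to that profile.
        for player in range(len(populations)):
            populations[player].append(oracle(player, populations, meta))
    return populations, meta
```

The strategy exploration problem discussed above corresponds to step 3: which new policies to add so that the restricted game stays representative at minimal cost.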
Note: Harnessing Tellurium Nanoparticles in the Digital Realm Plasmon Resonance, in the Context of Brewster's Angle and the Drude Model for Fake News Adsorption in Incomplete Information Games
This note explores the innovative application of soliton theory and plasmonic phenomena in modeling user behavior and engagement within digital health platforms. By introducing the concept of soliton solutions, we present a novel approach to understanding stable patterns of health improvement behaviors over time. Additionally, we delve into the role of tellurium nanoparticles and their plasmonic properties in adsorbing fake news, thereby influencing user interactions and engagement levels. Through a theoretical framework that combines nonlinear dynamics with the unique characteristics of tellurium nanoparticles, we aim to provide new insights into the dynamics of user engagement in digital health environments. Our analysis highlights the potential of soliton theory in capturing the complex, nonlinear dynamics of user behavior, while the application of plasmonic phenomena offers a promising avenue for enhancing the sensitivity and effectiveness of digital health platforms. This research ventures into uncharted territory where optical phenomena such as Brewster's Angle and Snell's Law, along with the concept of spin solitons, are metaphorically applied to address the challenge of fake news dissemination. By exploring the analogy between light refraction, reflection, and the propagation of information in digital platforms, we unveil a novel perspective on how the 'angle' at which information is presented can significantly affect its acceptance and spread. Additionally, we propose the use of tellurium nanoparticles to manage 'information waves' through mechanisms akin to plasmonic resonance and soliton dynamics. This theoretical exploration aims to bridge the gap between physical sciences and digital communication, offering insights into the development of strategies for mitigating misinformation.
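For reference, the physical relations the note borrows metaphorically are standard; they are stated here purely for context, in the usual notation ($n_i$ refractive indices, $\omega_p$ plasma frequency, $\gamma$ damping rate):

```latex
% Snell's law, Brewster's angle, and the Drude permittivity.
n_1 \sin\theta_1 = n_2 \sin\theta_2,
\qquad
\theta_B = \arctan\!\left(\frac{n_2}{n_1}\right),
\qquad
\varepsilon(\omega) = 1 - \frac{\omega_p^{2}}{\omega^{2} + i\gamma\omega}
```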
Rational inference of relative preferences
Statistical decision theory axiomatically assumes that the relative desirability of different options that humans perceive is well described by assigning them option-specific scalar utility functions. However, this assumption is refuted by observed human behavior, including studies wherein preferences have been shown to change systematically simply through variation in the set of choice options presented. In this paper, we show that interpreting desirability as a relative comparison between available options at any particular decision instance results in a rational theory of value-inference that explains heretofore intractable violations of rational choice behavior in human subjects. Complementarily, we also characterize the conditions under which a rational agent selecting optimal options indicated by dynamic value inference in our framework will behave identically to one whose preferences are encoded using a static ordinal utility function.
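Schematically, and in notation not taken from the paper, the contrast is between a fixed option-specific utility and a choice-set-dependent value:

```latex
% Static ordinal utility: o \succ o' \iff u(o) > u(o'),
% independent of the offered set.
% Relative (set-dependent) value: desirability is a comparison within
% the offered set S, so rankings may reverse across sets:
v(o \mid S) > v(o' \mid S)
\quad\text{while}\quad
v(o \mid S') < v(o' \mid S')
```

The dependence on $S$ is exactly what a static $u$ cannot express, which is how the framework accommodates the choice-set effects cited above.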
Algorithmic Bayesian Epistemology
One aspect of the algorithmic lens in theoretical computer science is a view on other scientific disciplines that focuses on satisfactory solutions that adhere to real-world constraints, as opposed to solutions that would be optimal ignoring such constraints. The algorithmic lens has provided a unique and important perspective on many academic fields, including molecular biology, ecology, neuroscience, quantum physics, economics, and social science. This thesis applies the algorithmic lens to Bayesian epistemology. Traditional Bayesian epistemology provides a comprehensive framework for how an individual's beliefs should evolve upon receiving new information. However, these methods typically assume an exhaustive model of such information, including the correlation structure between different pieces of evidence. In reality, individuals might lack such an exhaustive model, while still needing to form beliefs. Beyond such informational constraints, an individual may be bounded by limited computation, or by limited communication with agents that have access to information, or by the strategic behavior of such agents. Even when these restrictions prevent the formation of a *perfectly* accurate belief, arriving at a *reasonably* accurate belief remains crucial. In this thesis, we establish fundamental possibility and impossibility results about belief formation under a variety of restrictions, and lay the groundwork for further exploration.
Statistical Games
This work contains the mathematical exploration of a few prototypical games in which central concepts from statistics and probability theory naturally emerge. The first two kinds of games are termed Fisher and Bayesian games, which are connected to Frequentist and Bayesian statistics, respectively. Later, a more general type of game is introduced, termed the Statistical game, in which a further parameter, the players' relative risk aversion, can be set. In this work, we show that Fisher and Bayesian games can be viewed as limiting cases of Statistical games. Therefore, Statistical games can be viewed as a unified framework incorporating both Frequentist and Bayesian statistics. Furthermore, a philosophical framework is (re-)presented -- often referred to as the minimax regret criterion -- as a general approach to decision making. The main motivation for this work was to embed Bayesian statistics into a broader decision-making framework in which, based on collected data, actions with consequences have to be taken, which can be translated into utilities (or rewards/losses) for the decision-maker. The work starts with the simplest possible toy model, related to hypothesis testing and statistical inference. This choice has two main benefits: (i) it allows us to determine (conjecture) the behaviour of the equilibrium strategies in various limiting cases, and (ii) we can introduce Statistical games without requiring additional stochastic parameters. The work contains game-theoretical methods related to two-player, non-cooperative games to determine and prove equilibrium strategies of Fisher, Bayesian, and Statistical games. It also relies on analytical tools for derivations concerning various limiting cases.
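The minimax regret criterion invoked above has a standard statement, reproduced here in generic notation for the reader's convenience:

```latex
% Regret of action a in state s, given loss L:
R(a, s) = L(a, s) - \min_{a'} L(a', s),
% and the criterion picks an action minimizing worst-case regret:
a^{\star} \in \arg\min_{a}\, \max_{s}\, R(a, s)
```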
Learning in Mean Field Games: A Survey
Laurière, Mathieu, Perrin, Sarah, Pérolat, Julien, Girgin, Sertan, Muller, Paul, Élie, Romuald, Geist, Matthieu, Pietquin, Olivier
Non-cooperative and cooperative games with a very large number of players have many applications but remain generally intractable as the number of players increases. Introduced by Lasry and Lions, and Huang, Caines and Malhamé, Mean Field Games (MFGs) rely on a mean-field approximation to allow the number of players to grow to infinity. Traditional methods for solving these games generally rely on solving partial or stochastic differential equations with full knowledge of the model. Recently, Reinforcement Learning (RL) has appeared promising for solving complex problems at scale. The combination of RL and MFGs is promising for solving games at a very large scale, both in terms of population size and environment complexity. In this survey, we review the quickly growing recent literature on RL methods to learn equilibria and social optima in MFGs. We first identify the most common settings (static, stationary, and evolutive) of MFGs. We then present a general framework for classical iterative methods (based on best-response computation or policy evaluation) to solve MFGs in an exact way. Building on these algorithms and the connection with Markov Decision Processes, we explain how RL can be used to learn MFG solutions in a model-free way. Lastly, we present numerical illustrations on a benchmark problem, and conclude with some perspectives.
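The classical iterative scheme referenced above (alternating a best-response computation with a population update) has roughly the following shape; `best_response` and `induced_distribution` are placeholders for a representative-agent MDP solver and a population-dynamics step, so this is a sketch of the framework rather than any specific algorithm from the survey.

```python
def mfg_fixed_point(mu0, best_response, induced_distribution,
                    iterations=100, damping=0.5):
    """Schematic best-response iteration for a mean field game.

    mu0 is the population distribution, represented as a vector over
    states (e.g., a NumPy array).
    best_response(mu)        -- optimal policy for a representative
                                agent facing population distribution mu
    induced_distribution(pi) -- distribution reached when the whole
                                population plays policy pi
    Damped averaging of distributions (fictitious-play-flavored) is a
    common stabilizer for such iterations.
    """
    mu = mu0
    pi = None
    for _ in range(iterations):
        pi = best_response(mu)             # representative agent's problem
        mu_new = induced_distribution(pi)  # population's reaction
        mu = (1 - damping) * mu + damping * mu_new
    # At a fixed point, pi is optimal against mu and mu is induced by
    # pi: a mean field equilibrium.
    return pi, mu
```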
Universal Imitation Games
Alan Turing proposed in 1950 a framework called an imitation game to decide if a machine could think. Using mathematics developed largely after Turing -- category theory -- we analyze a broader class of universal imitation games (UIGs), which includes static, dynamic, and evolutionary games. In static games, the participants are in a steady state. In dynamic UIGs, "learner" participants are trying to imitate "teacher" participants over the long run. In evolutionary UIGs, the participants are competing against each other in an evolutionary game, and participants can go extinct and be replaced by others with higher fitness. We use the framework of category theory -- in particular, two influential results by Yoneda -- to characterize each type of imitation game. Universal properties in categories are defined by initial and final objects. We characterize dynamic UIGs where participants are learning by inductive inference as initial algebras over well-founded sets, and contrast them with participants learning by coinductive inference over the final coalgebra of non-well-founded sets. We briefly discuss the extension of our categorical framework for UIGs to imitation games on quantum computers.
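For readers unfamiliar with the two universal constructions contrasted above, the standard definitions are as follows (generic endofunctor $F$; this is textbook material, not notation from the paper):

```latex
% Initial F-algebra: structure map into the carrier; supports induction.
\mathrm{in} : F(\mu F) \to \mu F
% Final F-coalgebra: structure map out of the carrier; supports coinduction.
\mathrm{out} : \nu F \to F(\nu F)
```

The initial algebra admits a unique homomorphism into every $F$-algebra, and the final coalgebra admits a unique homomorphism from every $F$-coalgebra; these unique-map properties are what make the inductive and coinductive characterizations of learners universal.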