Game Theory: Overviews


Securing NextG Systems against Poisoning Attacks on Federated Learning: A Game-Theoretic Solution

arXiv.org Artificial Intelligence

This paper studies the poisoning attack and defense interactions in a federated learning (FL) system, specifically in the context of wireless signal classification using deep learning for next-generation (NextG) communications. FL collectively trains a global model without the need for clients to exchange their data samples. By leveraging geographically dispersed clients, the trained global model can be used for incumbent user identification, facilitating spectrum sharing. However, in this distributed learning system, the presence of malicious clients introduces the risk of poisoning the training data to manipulate the global model through falsified local model exchanges. To address this challenge, a proactive defense mechanism is employed in this paper to make informed decisions regarding the admission or rejection of clients participating in FL systems. Consequently, the attack-defense interactions are modeled as a game, centered around the underlying admission and poisoning decisions. First, performance bounds are established, encompassing the best and worst strategies for attackers and defenders. Subsequently, the attack and defense utilities are characterized within the Nash equilibrium, where no player can unilaterally improve its performance given the fixed strategies of others. The results offer insights into novel operational modes that safeguard FL systems against poisoning attacks by quantifying the performance of both attacks and defenses in the context of NextG communications.
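As a hedged illustration of the admission-versus-poisoning interaction described above, the following minimal sketch computes a mixed Nash equilibrium of a hypothetical 2x2 defender-attacker game via the standard indifference conditions; the actions and payoff numbers are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical 2x2 admission-vs-poisoning game; all payoffs are illustrative only.
# Defender (row): 0 = admit the client, 1 = reject the client.
# Attacker (col): 0 = poison its local update, 1 = send an honest update.
D = np.array([[-3.0,  2.0],   # defender payoffs
              [ 0.0, -1.0]])
A = np.array([[ 3.0, -2.0],   # attacker payoffs
              [-1.0,  1.0]])

def mixed_nash_2x2(D, A):
    """Mixed equilibrium of a 2x2 bimatrix game via indifference conditions."""
    # Defender mixes so that the attacker is indifferent between its two columns.
    p = (A[1, 1] - A[1, 0]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])
    # Attacker mixes so that the defender is indifferent between its two rows.
    q = (D[1, 1] - D[0, 1]) / (D[0, 0] - D[0, 1] - D[1, 0] + D[1, 1])
    return np.array([p, 1 - p]), np.array([q, 1 - q])

defender, attacker = mixed_nash_2x2(D, A)
print("defender (admit, reject):", defender)     # ~[0.29, 0.71]
print("attacker (poison, honest):", attacker)    # [0.5, 0.5]
print("defender utility:", defender @ D @ attacker)
print("attacker utility:", defender @ A @ attacker)
```

Under these hypothetical payoffs the game has no pure equilibrium, so both sides randomize; the paper's actual utilities and admission decisions are defined over the FL training process rather than a payoff table.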


A survey on algorithms for Nash equilibria in finite normal-form games

arXiv.org Artificial Intelligence

Nash equilibrium is one of the most influential solution concepts in game theory. With the development of computer science and artificial intelligence, there is an increasing demand for Nash equilibrium computation, especially in Internet economics and multi-agent learning. This paper reviews algorithms for computing Nash equilibria and their approximations in finite normal-form games from both theoretical and empirical perspectives. For the theoretical part, we classify the algorithms in the literature and present the basic ideas behind their design and analysis. For the empirical part, we present a comprehensive comparison of these algorithms over different kinds of games. Based on these results, we provide practical suggestions on implementing and using them. Finally, we present a series of open problems arising from both theoretical and practical considerations.
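As a minimal, hedged sketch of one classical way to compute equilibria in small two-player normal-form games (support enumeration; the abstract does not specify which algorithms the survey covers, so this is only a representative example), assuming NumPy is available:

```python
import itertools
import numpy as np

def solve_indifference(P, rows, cols):
    """Find a mix over the actions in `cols` (columns of P) that equalizes the
    payoff of every action in `rows`; returns a full-length strategy or None."""
    k = len(cols)                               # equal-sized supports assumed
    M = np.zeros((k + 1, k + 1))
    M[:len(rows), :k] = P[np.ix_(rows, cols)]
    M[:len(rows), k] = -1.0                     # unknown common payoff value v
    M[k, :k] = 1.0                              # probabilities sum to one
    b = np.zeros(k + 1)
    b[k] = 1.0
    try:
        sol = np.linalg.solve(M, b)
    except np.linalg.LinAlgError:
        return None
    probs = sol[:k]
    if np.any(probs < -1e-9):
        return None
    strategy = np.zeros(P.shape[1])
    strategy[list(cols)] = np.clip(probs, 0.0, None)
    return strategy

def support_enumeration(A, B, tol=1e-8):
    """Nash equilibria of the bimatrix game with row payoffs A and column payoffs B,
    by enumerating equal-sized supports (sufficient for nondegenerate games)."""
    m, n = A.shape
    equilibria = []
    for k in range(1, min(m, n) + 1):
        for I in itertools.combinations(range(m), k):
            for J in itertools.combinations(range(n), k):
                y = solve_indifference(A, I, J)      # column mix making rows in I indifferent
                x = solve_indifference(B.T, J, I)    # row mix making columns in J indifferent
                if x is None or y is None:
                    continue
                # Best-response check: no action outside the support does better.
                if (A @ y).max() <= (A @ y)[list(I)].max() + tol and \
                   (B.T @ x).max() <= (B.T @ x)[list(J)].max() + tol:
                    equilibria.append((x, y))
    return equilibria

# Matching pennies: unique mixed equilibrium at (1/2, 1/2) for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(support_enumeration(A, -A))
```

Support enumeration is exact but exponential in the number of actions and can misbehave on degenerate games, which is one reason approximation algorithms are also of interest.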


Stackelberg Driver Model for Continual Policy Improvement in Scenario-Based Closed-Loop Autonomous Driving

arXiv.org Artificial Intelligence

The deployment of autonomous vehicles (AVs) has faced hurdles due to the dominance of rare but critical corner cases within the long-tail distribution of driving scenarios, which negatively affects their overall performance. To address this challenge, adversarial generation methods have emerged as an efficient class of approaches for synthesizing safety-critical scenarios for AV testing. However, these generated scenarios are often underutilized for AV training, leaving the potential for continual AV policy improvement untapped and lacking the closed-loop design needed to realize it. Therefore, we tailor the Stackelberg Driver Model (SDM) to accurately characterize the hierarchical nature of vehicle interaction dynamics, facilitating iterative improvement by engaging background vehicles (BVs) and the AV in a sequential game-like interaction paradigm. With the AV acting as the leader and the BVs as followers, this leader-follower modeling ensures that the AV consistently refines its policy while accounting for the fact that the BVs play best responses to challenge it. Extensive experiments show that our algorithm outperforms several baselines, especially in higher-dimensional scenarios, leading to substantial advancements in AV capabilities while continually generating progressively challenging scenarios. Code is available at https://github.com/BlueCat-de/SDM.
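The toy sketch below only illustrates the leader-follower (Stackelberg) structure on a hypothetical finite payoff matrix; the actual SDM operates over continuous driving policies trained in closed loop, and the matrices and numbers here are invented for illustration.

```python
import numpy as np

# L[i, j]: leader (AV) payoff, F[i, j]: follower (BV) payoff when the leader
# commits to action i and the follower responds with action j. Hypothetical values.
L = np.array([[4.0, 1.0, 0.0],
              [3.0, 2.0, 1.0],
              [0.0, 0.0, 5.0]])
F = np.array([[1.0, 3.0, 0.0],
              [0.0, 2.0, 1.0],
              [2.0, 0.0, 1.0]])

def stackelberg_pure(L, F):
    """Leader commits to a pure action; the follower best-responds; the leader
    anticipates that best response when choosing its commitment."""
    best_value, best_pair = -np.inf, None
    for i in range(L.shape[0]):
        j = int(np.argmax(F[i]))          # follower's best response to commitment i
        if L[i, j] > best_value:
            best_value, best_pair = L[i, j], (i, j)
    return best_pair, best_value

(commit, response), value = stackelberg_pure(L, F)
print(f"leader commits to {commit}, follower responds with {response}, leader payoff {value}")
```

The key point carried over to the SDM is that the leader chooses its commitment while anticipating the follower's best response, mirroring the information asymmetry between the AV and the BVs.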


Scheming AIs: Will AIs fake alignment during training in order to get power?

arXiv.org Artificial Intelligence

This report examines whether advanced AIs that perform well in training will be doing so in order to gain power later -- a behavior I call "scheming" (also sometimes called "deceptive alignment"). I conclude that scheming is a disturbingly plausible outcome of using baseline machine learning methods to train goal-directed AIs sophisticated enough to scheme (my subjective probability on such an outcome, given these conditions, is roughly 25%). In particular: if performing well in training is a good strategy for gaining power (as I think it might well be), then a very wide variety of goals would motivate scheming -- and hence, good training performance. This makes it plausible that training might either land on such a goal naturally and then reinforce it, or actively push a model's motivations towards such a goal as an easy way of improving performance. What's more, because schemers pretend to be aligned on tests designed to reveal their motivations, it may be quite difficult to tell whether this has occurred. However, I also think there are reasons for comfort. In particular: scheming may not actually be such a good strategy for gaining power; various selection pressures in training might work against schemer-like goals (for example, relative to non-schemers, schemers need to engage in extra instrumental reasoning, which might harm their training performance); and we may be able to increase such pressures intentionally. The report discusses these and a wide variety of other considerations in detail, and it suggests an array of empirical research directions for probing the topic further.


Exploration noise for learning linear-quadratic mean field games

arXiv.org Artificial Intelligence

The goal of this paper is to demonstrate that common noise may serve as an exploration noise for learning the solution of a mean field game. This concept is exemplified here through a toy linear-quadratic model, for which a suitable form of common noise has already been proven to restore existence and uniqueness. We go one step further and prove that the same form of common noise may force the convergence of the learning algorithm known as fictitious play, without requiring any additional potential or monotone structure. Several numerical examples are provided to support our theoretical analysis.
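For readers unfamiliar with fictitious play, here is a minimal finite-game sketch of the update rule (each player best-responds to the empirical average of the opponent's past play); the paper itself studies the continuous linear-quadratic mean field setting with common noise, which this toy example does not capture.

```python
import numpy as np

# Fictitious play on a two-player zero-sum matrix game (matching pennies).
# Each round, both players best-respond to the opponent's empirical mix so far;
# for zero-sum games the empirical frequencies converge to an equilibrium.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])        # row player maximizes x @ A @ y, column player minimizes

counts_row = np.ones(2)            # empirical counts of past actions (initialized uniformly)
counts_col = np.ones(2)
for t in range(10_000):
    y_bar = counts_col / counts_col.sum()     # column player's empirical mix
    x_bar = counts_row / counts_row.sum()     # row player's empirical mix
    row_action = int(np.argmax(A @ y_bar))    # best response to the average
    col_action = int(np.argmin(x_bar @ A))    # column player minimizes its loss
    counts_row[row_action] += 1
    counts_col[col_action] += 1

print("row empirical strategy:", counts_row / counts_row.sum())   # approaches [0.5, 0.5]
print("col empirical strategy:", counts_col / counts_col.sum())   # approaches [0.5, 0.5]
```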


Slingshot Perturbation to Learning in Monotone Games

arXiv.org Artificial Intelligence

This paper addresses the problem of learning Nash equilibria in {\it monotone games}, where the gradient of the payoff functions is monotone in the strategy profile space and potentially corrupted by additive noise. The optimistic family of learning algorithms, exemplified by optimistic Follow-the-Regularized-Leader and optimistic Mirror Descent, successfully achieves last-iterate convergence in scenarios devoid of noise, leading the dynamics to a Nash equilibrium. A recent emerging trend underscores the promise of the perturbation approach, where payoff functions are perturbed based on the distance from an anchoring, or {\it slingshot}, strategy. In response, we first establish a unified framework for learning equilibria in monotone games, accommodating both full and noisy feedback. Second, we derive convergence rates toward an approximate equilibrium, irrespective of the presence of noise. Third, we introduce a twist by updating the slingshot strategy, re-anchoring it to the current strategy at finite intervals. This innovation enables us to identify the exact Nash equilibrium of the underlying game with guaranteed rates. The proposed framework is all-encompassing, integrating existing payoff-perturbed algorithms. Finally, empirical demonstrations affirm that algorithms grounded in this framework exhibit significantly accelerated convergence.
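A minimal sketch, loosely inspired by the perturbation idea summarized above: gradient play on a payoff perturbed toward an anchor (slingshot) strategy, with the anchor periodically re-set to the current iterate. The toy zero-sum game, the Euclidean (rather than mirror-descent) update, and all step sizes are choices of this sketch, not the paper's algorithm.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])     # matching pennies: x maximizes x @ A @ y, y minimizes it

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def slingshot_gradient_play(A, mu=0.2, eta=0.01, outer=20, inner=5000):
    """Projected gradient play on payoffs perturbed toward an anchor (slingshot)
    strategy; the anchor is re-set to the current iterate after each inner phase."""
    x, y = np.array([0.9, 0.1]), np.array([0.2, 0.8])   # arbitrary starting point
    anchor_x, anchor_y = x.copy(), y.copy()
    for _ in range(outer):
        for _ in range(inner):
            gx = A @ y - mu * (x - anchor_x)            # perturbed ascent direction for x
            gy = -(x @ A) - mu * (y - anchor_y)         # perturbed ascent direction for y
            x = project_simplex(x + eta * gx)
            y = project_simplex(y + eta * gy)
        anchor_x, anchor_y = x.copy(), y.copy()         # slingshot (anchor) update
    return x, y

x, y = slingshot_gradient_play(A)
print("last iterates:", x, y)    # both approach the equilibrium [0.5, 0.5]
```

The intuition the sketch tries to convey: the perturbation makes each inner phase strongly monotone, so the dynamics converge in the last iterate, and re-anchoring then moves the perturbed equilibrium toward the exact one (here the uniform strategy of matching pennies).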


The strategy of conflict and cooperation

arXiv.org Artificial Intelligence

This paper introduces a unified framework called cooperative extensive form games, which (i) generalizes standard non-cooperative games, and (ii) allows for more complex coalition formation dynamics than previous concepts like coalition-proof Nash equilibrium. Central to this framework is a novel solution concept called cooperative equilibrium system (CES). CES differs from Nash equilibrium in two important respects. First, a CES is immune to both unilateral and multilateral `credible' deviations. Second, unlike Nash equilibrium, whose stability relies on the assumption that the strategies of non-deviating players are held fixed, CES allows for the possibility that players may regroup and adjust their strategies in response to a deviation. The main result establishes that every cooperative extensive form game, possibly with imperfect information, possesses a CES. For games with perfect information, the proof is constructive. This framework is broadly applicable in contexts such as oligopolistic markets and dynamic political bargaining.


The Complexity of Matching Games: A Survey

Journal of Artificial Intelligence Research

Matching games naturally generalize assignment games, a well-known class of cooperative games. Interest in matching games has grown recently due to some breakthrough results and new applications. This state-of-the-art survey provides an overview of matching games and extensions, such as b-matching games and partitioned matching games; the latter originating from the emerging area of international kidney exchange. In this survey we focus on computational complexity aspects of various game-theoretical solution concepts, such as the core, nucleolus and Shapley value, when the input is restricted to a matching game or one of its variants.
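To make the solution concepts concrete, here is a hedged toy example of a weighted matching game (four players on a small graph; the edge weights are invented) in which the Shapley value is computed by brute-force enumeration of player orderings. The survey's point is precisely that such brute force does not scale, and the complexity of doing better is the subject of its analysis.

```python
import itertools

# Hypothetical weighted matching game on four players (graph vertices): the value
# of a coalition is the maximum weight of a matching inside its induced subgraph.
players = [0, 1, 2, 3]
edges = {(0, 1): 3.0, (1, 2): 2.0, (2, 3): 4.0, (0, 3): 1.0}

def coalition_value(coalition):
    """Max-weight matching restricted to `coalition` (brute force, fine for tiny games)."""
    members = set(coalition)
    usable = [(e, w) for e, w in edges.items() if e[0] in members and e[1] in members]
    best = 0.0
    for k in range(len(usable) + 1):
        for subset in itertools.combinations(usable, k):
            endpoints = [v for (e, _) in subset for v in e]
            if len(endpoints) == len(set(endpoints)):      # disjoint edges form a matching
                best = max(best, sum(w for _, w in subset))
    return best

def shapley_values():
    """Average each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    perms = list(itertools.permutations(players))
    for order in perms:
        coalition = []
        for p in order:
            before = coalition_value(coalition)
            coalition.append(p)
            phi[p] += coalition_value(coalition) - before
    return {p: total / len(perms) for p, total in phi.items()}

print(shapley_values())    # the four values sum to the grand-coalition value of 7.0
```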


The Update Equivalence Framework for Decision-Time Planning

arXiv.org Artificial Intelligence

The process of revising (or constructing) a policy immediately prior to execution -- known as decision-time planning -- is key to achieving superhuman performance in perfect-information settings like chess and Go. A recent line of work has extended decision-time planning to more general imperfect-information settings, leading to superhuman performance in poker. However, these methods require considering subgames whose sizes grow quickly in the amount of non-public information, making them unhelpful when the amount of non-public information is large. Motivated by this issue, we introduce an alternative framework for decision-time planning that is not based on subgames but rather on the notion of update equivalence. In this framework, decision-time planning algorithms simulate updates of synchronous learning algorithms. This framework enables us to introduce a new family of principled decision-time planning algorithms that do not rely on public information, opening the door to sound and effective decision-time planning in settings with large amounts of non-public information. In experiments, members of this family produce comparable or superior results compared to state-of-the-art approaches in Hanabi and improve performance in 3x3 Abrupt Dark Hex and Phantom Tic-Tac-Toe.


A Review On Game Theory With Smart Grid Security

arXiv.org Artificial Intelligence

The smart grid is a modern two-way system combining the power grid, control centers, smart metering, energy routing, and customer demand response services. Because the system is complex, addressing security vulnerabilities is essential for sound operation and continuity of service. Since the smart grid connects end users with energy providers, these two parties interact throughout the energy management workflow. In this regard, game theory provides effective insight into the analysis of security measures for the smart grid, with these parties acting as the players in game models that address the various threats to grid components. This work presents a brief review of existing approaches to threat models across different sectors of the smart grid. The solutions to these threats are based on game-theoretic approaches that pit attackers against defenders in the considered scenarios.
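As one hedged illustration of the kind of attacker-defender model such reviews consider, the sketch below solves a hypothetical zero-sum smart-grid security game with a maximin linear program; the components, payoffs, and the choice of a zero-sum model are assumptions of this example, not results of the review, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical zero-sum smart-grid security game (payoffs are illustrative only).
# Rows: defender allocates monitoring to {metering, routing, control center}.
# Columns: attacker targets one of the same three components.
# Entry = defender's payoff (negative of the expected damage).
A = np.array([[ 0.0, -4.0, -6.0],
              [-3.0,  0.0, -5.0],
              [-2.0, -3.0,  0.0]])
m, n = A.shape

# Maximin LP: choose a mixed defense x and value v, maximizing v such that the
# defender's expected payoff is at least v against every attacker target.
c = np.zeros(m + 1); c[-1] = -1.0                      # minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v - (x @ A)_j <= 0 for all j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to one
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]              # x >= 0, v unrestricted
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("defender mixed strategy:", np.round(x, 3), "game value:", round(v, 3))
```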