

 cooperative game theory



A Mechanism for Mutual Fairness in Cooperative Games with Replicable Resources -- Extended Version

Filter, Björn, Möller, Ralf, Özçep, Özgür Lütfü

arXiv.org Artificial Intelligence

The latest developments in AI focus on agentic systems where artificial and human agents cooperate to realize global goals. An example is collaborative learning, which aims to train a global model based on data from individual agents. A major challenge in designing such systems is to guarantee safety and alignment with human values, particularly a fair distribution of rewards upon achieving the global goal. Cooperative game theory offers useful abstractions of cooperating agents via value functions, which assign value to each coalition, and via reward functions. With these, the idea of fair allocation can be formalized by specifying fairness axioms and designing concrete mechanisms. Classical cooperative game theory, exemplified by the Shapley value, does not fully capture scenarios like collaborative learning, as it assumes nonreplicable resources, whereas data and models can be replicated. Infinite replicability requires a generalized notion of fairness, formalized through new axioms and mechanisms. These must address imbalances in reciprocal benefits among participants, which can lead to strategic exploitation and unfair allocations. The main contribution of this paper is a mechanism and a proof that it fulfills the property of mutual fairness, formalized by the Balanced Reciprocity Axiom. It ensures that, for every pair of players, each benefits equally from the participation of the other.
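The classical baseline that the paper generalizes can be made concrete with a short sketch: an exact Shapley value computation for a small game with non-replicable resources. The toy characteristic function below (a coalition of size s is worth s²) is purely illustrative and does not come from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a characteristic function v: frozenset -> float."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy superadditive game: a coalition of size s is worth s^2.
v = lambda S: len(S) ** 2
print(shapley_values(["a", "b", "c"], v))  # each player gets 3.0; the total equals v(N) = 9
```

Efficiency (the allocations sum to v(N)) and symmetry hold here; the paper's point is that axioms like these no longer suffice once resources such as data and models can be replicated, which is what motivates the Balanced Reciprocity Axiom.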


A Cooperative Game-Based Multi-Criteria Weighted Ensemble Approach for Multi-Class Classification

DongSeong-Yoon

arXiv.org Artificial Intelligence

Posted with permission from KICS (Aug 7, 2025). The published version may differ. Abstract -- Since the Fourth Industrial Revolution, AI technology has been widely used in many fields, but several limitations remain to be overcome, including overfitting/underfitting, class imbalance, and the limits of representation (hypothesis space) arising from the characteristics of different models. As a way to overcome these problems, ensemble learning, commonly known as model combining, is extensively used in machine learning. Among ensemble learning methods, voting ensembles have been studied with various weighting schemes and have shown performance improvements. However, existing methods that reflect prior information about classifiers in the weights consider only one evaluation criterion, which limits the range of information a model can realistically take into account. Therefore, this paper proposes a method for making decisions through cooperative games in multi-criteria situations. With this method, various types of information known about the classifiers beforehand can be considered and reflected simultaneously, leading to an appropriate weight distribution and improved performance. The machine learning algorithms were applied to the OpenML-CC18 dataset and compared with existing ensemble weighting methods. The experimental results showed superior performance compared to other weighting methods. INTRODUCTION Recently, artificial intelligence (AI) has been making significant strides in various fields, backed by advancements in diverse methodologies, hardware development, interdisciplinary research, and trials across different domains [1]-[5].
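As a rough illustration of multi-criteria weighted voting, the sketch below derives each classifier's weight from several hypothetical validation scores and soft-votes with the result. The criterion names and the simple mean aggregation are assumptions for illustration only; the paper's actual weights come from a cooperative-game solution.

```python
import numpy as np

def weighted_soft_vote(probas, weights):
    """Weighted soft voting: probas has shape (n_clf, n_samples, n_classes)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalize weights to a distribution
    avg = np.tensordot(w, probas, axes=1)     # -> (n_samples, n_classes)
    return avg.argmax(axis=1)

# Hypothetical per-classifier scores on three criteria (e.g. accuracy, F1, AUC).
scores = np.array([[0.90, 0.88, 0.93],
                   [0.85, 0.86, 0.84],
                   [0.70, 0.65, 0.72]])
weights = scores.mean(axis=1)                 # stand-in for the game-theoretic weights
probas = np.random.default_rng(0).random((3, 5, 4))
labels = weighted_soft_vote(probas, weights)  # one predicted class per sample
```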


CORA: Coalitional Rational Advantage Decomposition for Multi-Agent Policy Gradients

Ji, Mengda, Xu, Genjiu, Wang, Liying

arXiv.org Artificial Intelligence

This work focuses on the credit assignment problem in cooperative multi-agent reinforcement learning (MARL). Sharing the global advantage among agents often leads to suboptimal policy updates as it fails to account for the distinct contributions of agents. Although numerous methods consider global or individual contributions for credit assignment, a detailed analysis at the coalition level remains lacking in many approaches. This work analyzes the over-updating problem during multi-agent policy updates from a coalition-level perspective. To address this issue, we propose a credit assignment method called Coalitional Rational Advantage Decomposition (CORA). CORA evaluates coalitional advantages via marginal contributions from all possible coalitions and decomposes advantages using the core solution from cooperative game theory, ensuring coalitional rationality. To reduce computational overhead, CORA employs random coalition sampling. Experiments on matrix games, differential games, and multi-agent collaboration benchmarks demonstrate that CORA outperforms strong baselines, particularly in tasks with multiple local optima. These findings highlight the importance of coalition-aware credit assignment for improving MARL performance.
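The marginal-contribution machinery behind such methods can be sketched with plain Monte Carlo permutation sampling. Note that CORA decomposes advantages with a core solution and its own random coalition-sampling scheme, so the uniform sampler below is only a simplified stand-in with an invented additive value function.

```python
import random

def sampled_marginal_contributions(agents, value, n_samples=500, seed=0):
    """Estimate each agent's average marginal contribution over random join orders."""
    rng = random.Random(seed)
    est = {a: 0.0 for a in agents}
    for _ in range(n_samples):
        order = list(agents)
        rng.shuffle(order)
        coalition, prev = set(), value(set())
        for a in order:
            coalition.add(a)
            cur = value(coalition)
            est[a] += cur - prev
            prev = cur
    return {a: est[a] / n_samples for a in agents}

# Additive toy "advantage": each agent contributes exactly its own weight.
weights = {0: 1.0, 1: 2.0, 2: 3.0}
credits = sampled_marginal_contributions(list(weights), lambda S: sum(weights[a] for a in S))
# For an additive game every join order gives the same marginals: {0: 1.0, 1: 2.0, 2: 3.0}
```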


Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models

Idrissi, Marouane Il, Machado, Agathe Fernandes, Charpentier, Arthur

arXiv.org Machine Learning

Cooperative game theory has become a cornerstone of post-hoc interpretability in machine learning, largely through the use of Shapley values. Yet, despite their widespread adoption, Shapley-based methods often rest on axiomatic justifications whose relevance to feature attribution remains debatable. In this paper, we revisit cooperative game theory from an interpretability perspective and argue for a broader and more principled use of its tools. We highlight two general families of efficient allocations, the Weber and Harsanyi sets, that extend beyond Shapley values and offer richer interpretative flexibility. We present an accessible overview of these allocation schemes, clarify the distinction between value functions and aggregation rules, and introduce a three-step blueprint for constructing reliable and theoretically grounded feature attributions. Our goal is to move beyond fixed axioms and provide the XAI community with a coherent framework to design attribution methods that are both meaningful and robust to shifting methodological trends.
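The Weber set mentioned in the abstract is the convex hull of marginal-contribution vectors, one per ordering of the players; a minimal sketch with a toy value function (not one from the paper):

```python
from itertools import permutations

def marginal_vector(order, v):
    """Marginal-contribution vector for one player ordering (a Weber set vertex)."""
    x, S = {}, frozenset()
    for p in order:
        x[p] = v(S | {p}) - v(S)
        S = S | {p}
    return x

def weber_vertices(players, v):
    """All marginal vectors; their convex hull is the Weber set."""
    return [marginal_vector(o, v) for o in permutations(players)]

# Any convex combination of the vertices is a Weber set allocation; putting
# equal weight on every ordering recovers the Shapley value.
players = ("a", "b")
v = lambda S: len(S) ** 2
verts = weber_vertices(players, v)  # [{'a': 1, 'b': 3}, {'b': 1, 'a': 3}]
shapley = {p: sum(x[p] for x in verts) / len(verts) for p in players}  # {'a': 2.0, 'b': 2.0}
```

Choosing a non-uniform distribution over orderings yields other Weber set allocations, which is the extra interpretative flexibility the abstract refers to.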


Shapley Machine: A Game-Theoretic Framework for N-Agent Ad Hoc Teamwork

Wang, Jianhong, Li, Yang, Kaski, Samuel, Lawry, Jonathan

arXiv.org Artificial Intelligence

Open multi-agent systems are increasingly important in modeling real-world applications, such as smart grids, swarm robotics, etc. In this paper, we aim to investigate a recently proposed problem for open multi-agent systems, referred to as n-agent ad hoc teamwork (NAHT), where only a number of agents are controlled. Existing methods tend to be based on heuristic design and consequently lack theoretical rigor and suffer from ambiguous credit assignment among agents. To address these limitations, we model and solve NAHT through the lens of cooperative game theory. More specifically, we first model an open multi-agent system, characterized by its value, as an instance situated in a space of cooperative games, generated by a set of basis games. We then extend this space, along with the state space, to accommodate dynamic scenarios, thereby characterizing NAHT. Exploiting the justifiable assumption that basis game values correspond to a sequence of n-step returns with different horizons, we represent the state values for NAHT in a form similar to $\lambda$-returns. Furthermore, we derive Shapley values to allocate state values to the controlled agents, as credits for their contributions to the ad hoc team. Different from the conventional approach of shaping Shapley values in an explicit form, we shape Shapley values by fulfilling the three axioms that uniquely describe them, well defined on the extended game space describing NAHT. To estimate Shapley values in dynamic scenarios, we propose a TD($\lambda$)-like algorithm. The resulting reinforcement learning (RL) algorithm is referred to as Shapley Machine. To the best of our knowledge, this is the first time that concepts from cooperative game theory are directly related to RL concepts. In experiments, we demonstrate the effectiveness of Shapley Machine and verify the reasonableness of our theory.


Alternative Methods to SHAP Derived from Properties of Kernels: A Note on Theoretical Analysis

Hiraki, Kazuhiro, Ishihara, Shinichi, Shino, Junnosuke

arXiv.org Artificial Intelligence

In the field of machine learning, Explainable Artificial Intelligence (XAI) refers to techniques and methods that make the decisions and predictions of machine learning models easier to understand. Among them, AFA (Additive Feature Attribution) is a method that decomposes a model's prediction into the contributions of individual features. Notably, SHAP (SHapley Additive exPlanations), proposed by [5], which is based on the Shapley value [8] in cooperative game theory, is well-known in this context. Recently, research on SHAP has been rapidly expanding ([4]). To reduce the computational cost of SHAP, various methods such as Tree-SHAP [5] and Fast SHAP [3] have been proposed and applied to actual data (for example, [2]). As an alternative to SHAP, [1] considers ES (Equal Surplus) and FESP (Fair Efficient Symmetric Perturbation), both of which are based on solution concepts in cooperative game theory. In this study, we investigate the relationship between AFA and the kernel in LIME (Local Interpretable Model-agnostic Explanations) as proposed by [6].
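For context, the standard LIME-style kernel under which weighted linear regression recovers Shapley values (the Shapley kernel of Kernel SHAP) weights a coalition of size s out of n features as (n-1) / (C(n,s) · s · (n-s)); a minimal sketch of that baseline, not of the note's alternative kernels:

```python
from math import comb, inf

def shapley_kernel_weight(n, s):
    """Shapley kernel weight for a coalition of size s out of n features."""
    if s == 0 or s == n:
        return inf  # empty/full coalitions are enforced as hard constraints in practice
    return (n - 1) / (comb(n, s) * s * (n - s))

# For n = 4 the extreme coalition sizes get the largest finite weights:
print([shapley_kernel_weight(4, s) for s in (1, 2, 3)])  # [0.25, 0.125, 0.25]
```

Changing this kernel changes which allocation the regression recovers, which is the lever that alternatives such as ES and FESP exploit.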


An Economic Solution to Copyright Challenges of Generative AI

Wang, Jiachen T., Deng, Zhun, Chiba-Okabe, Hiroaki, Barak, Boaz, Su, Weijie J.

arXiv.org Artificial Intelligence

Generative artificial intelligence (AI) systems are trained on large data corpora to generate new pieces of text, images, videos, and other media. There is growing concern that such systems may infringe on the copyright interests of training data contributors. To address the copyright challenges of generative AI, we propose a framework that compensates copyright owners proportionally to their contributions to the creation of AI-generated content. The metric for contributions is quantitatively determined by leveraging the probabilistic nature of modern generative AI models and using techniques from cooperative game theory in economics. This framework enables a platform where AI developers benefit from access to high-quality training data, thus improving model performance. Meanwhile, copyright owners receive fair compensation, driving the continued provision of relevant data for generative model training. Experiments demonstrate that our framework successfully identifies the most relevant data sources used in artwork generation, ensuring a fair and interpretable distribution of revenues among copyright owners.
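The final step of such a framework, turning nonnegative contribution scores into revenue shares, can be sketched in a few lines. The source names and scores below are invented for illustration; the substantive part of the paper is the contribution metric itself, derived from model probabilities and cooperative game theory.

```python
def royalty_split(contributions, revenue):
    """Split revenue proportionally to nonnegative contribution scores."""
    pos = {k: max(v, 0.0) for k, v in contributions.items()}  # clip negatives to zero
    total = sum(pos.values())
    return {k: revenue * v / total for k, v in pos.items()}

# Hypothetical contribution scores for three copyright owners and a $1000 pool.
shares = royalty_split({"photos_A": 3.0, "art_B": 1.0, "archive_C": 0.0}, 1000.0)
# -> {'photos_A': 750.0, 'art_B': 250.0, 'archive_C': 0.0}
```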


Modeling and analysis of pHRI with Differential Game Theory

Franceschi, Paolo, Beschi, Manuel, Pedrocchi, Nicola, Valente, Anna

arXiv.org Artificial Intelligence

Applications in which humans and robots work together are increasingly widespread, and modeling and control techniques that enable physical Human-Robot Interaction (pHRI) are widely investigated. To better understand its potential in pHRI, this work investigates cooperative Differential Game Theory as a model of pHRI in a cooperative reaching task, specifically for reference tracking. The proposed controller, based on Cooperative Game Theory, is analyzed in depth and compared in simulation with two other techniques, a Linear Quadratic Regulator (LQR) and a Non-Cooperative Game-Theoretic controller. The simulations show how different tunings of the control parameters affect the system response and the control effort of both players under the three controllers, suggesting the use of Cooperative GT when the robot should assist the human, whereas Non-Cooperative GT is the better choice when the robot should lead the action. Finally, preliminary tests with a trained human are performed to extract useful information on the real applicability and limitations of the proposed method.


Computational Aspects of Cooperative Game Theory (Synthesis Lectures on Artificial Intelligence and Machine Learning): Chalkiadakis, Georgios, Elkind, Edith, Wooldridge, Michael: 9781608456529: Amazon.com: Books

#artificialintelligence

This manuscript was a pleasure to discover, and a pleasure to read -- a broad, but succinct, overview of work in computational cooperative game theory. I will certainly use this text with my own students, both within courses and to provide comprehensive background for students in my research group. The authors have made a substantial contribution to the multiagent systems and algorithmic game theory communities.