GRPO-GCC: Enhancing Cooperation in Spatial Public Goods Games via Group Relative Policy Optimization with Global Cooperation Constraint
Yang, Zhaoqilin, Li, Chanchan, Liu, Tianqi, Zhao, Hongxin, Tian, Youliang
arXiv.org Artificial Intelligence
Inspired by the principle of self-regulating cooperation in collective institutions, we propose the Group Relative Policy Optimization with Global Cooperation Constraint (GRPO-GCC) framework. This work is the first to introduce GRPO into spatial public goods games, establishing a new deep reinforcement learning baseline for structured populations. GRPO-GCC integrates group relative policy optimization with a global cooperation constraint that strengthens incentives at intermediate cooperation levels while weakening them at extremes. This mechanism aligns local decision making with sustainable collective outcomes and prevents collapse into either universal defection or unconditional cooperation. The framework advances beyond existing approaches by combining group-normalized advantage estimation, a reference-anchored KL penalty, and a global incentive term that dynamically adjusts cooperative payoffs. As a result, it achieves accelerated cooperation onset, stabilized policy adaptation, and long-term sustainability. GRPO-GCC demonstrates how a simple yet global signal can reshape incentives toward resilient cooperation, and provides a new paradigm for multi-agent reinforcement learning in socio-technical systems.
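The core ingredients the abstract names (group-normalized advantage estimation, a reference-anchored KL penalty, and a global incentive term that peaks at intermediate cooperation levels) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the quadratic shape of the cooperation bonus, and the per-sample KL approximation are all assumptions made for exposition.

```python
import numpy as np

def group_relative_advantages(rewards):
    # GRPO-style advantage: normalize each sampled reward against the
    # group mean and standard deviation, so no learned critic is needed.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def global_cooperation_bonus(coop_fraction, strength=1.0):
    # Hypothetical global cooperation constraint: the incentive is
    # strongest at intermediate cooperation levels and vanishes at the
    # extremes (all-defect or all-cooperate), matching the abstract's
    # description. The quadratic shape is chosen for illustration only.
    return strength * coop_fraction * (1.0 - coop_fraction)

def grpo_gcc_objective(logp_new, logp_ref, advantages,
                       coop_fraction, beta=0.1, lam=0.5):
    # Surrogate objective combining group-normalized advantages, a
    # reference-anchored KL penalty (crude per-sample estimate), and a
    # global incentive term added to the cooperative payoff signal.
    ratio = np.exp(logp_new - logp_ref)
    kl = logp_new - logp_ref
    bonus = global_cooperation_bonus(coop_fraction)
    return np.mean(ratio * (advantages + lam * bonus) - beta * kl)
```

By construction the bonus is zero at a cooperation fraction of 0 or 1 and maximal at 0.5, so it strengthens incentives at intermediate cooperation levels while weakening them at the extremes, as the abstract describes.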
Oct-13-2025