Group size effects and collective misalignment in LLM multi-agent systems
Ariel Flint, Luca Maria Aiello, Romualdo Pastor-Satorras, Andrea Baronchelli
arXiv.org Artificial Intelligence
Multi-agent systems of large language models (LLMs) are rapidly expanding across domains, introducing dynamics not captured by single-agent evaluations. Yet, existing work has mostly contrasted the behavior of a single agent with that of a collective of fixed size, leaving open a central question: how does group size shape dynamics? Here, we move beyond this dichotomy and systematically explore outcomes across the full range of group sizes. We focus on multi-agent misalignment, building on recent evidence that interacting LLMs playing a simple coordination game can generate collective biases absent in individual models. First, we show that collective bias is a deeper phenomenon than previously assessed: interaction can amplify individual biases, introduce new ones, or override model-level preferences. Second, we demonstrate that group size affects the dynamics in a non-linear way, revealing model-dependent dynamical regimes. Finally, we develop a mean-field analytical approach and show that, above a critical population size, simulations converge to deterministic predictions that expose the basins of attraction of competing equilibria. These findings establish group size as a key driver of multi-agent dynamics and highlight the need to consider population-level effects when deploying LLM-based systems at scale.
Oct-28-2025
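The collective dynamics described in the abstract can be illustrated with a toy simulation. The sketch below is an assumption, not the paper's actual protocol: it replaces the LLM agents with a minimal two-convention coordination game (a biased voter-model-style update), where an individual-level bias `bias` tilts how pairwise mismatches are resolved. The function names and the update rule are hypothetical; the point is only to show how group size shapes which convention the population fixates on.

```python
import random

def simulate(n_agents, bias=0.6, steps=20000, seed=0):
    """Toy two-convention coordination game (NOT the paper's protocol).

    Each agent holds convention 0 or 1. At each step a random pair
    interacts; on a mismatch, the pair jointly adopts convention 1
    with probability `bias`, else convention 0. This models an
    individual-level preference whose collective consequences depend
    on population size. Returns the final fraction holding convention 1.
    """
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        if state[i] != state[j]:
            # The pair resolves the mismatch; convention 1 wins with prob `bias`.
            winner = 1 if rng.random() < bias else 0
            state[i] = state[j] = winner
        if len(set(state)) == 1:  # consensus reached, stop early
            break
    return sum(state) / n_agents

if __name__ == "__main__":
    # Fraction of independent runs that fixate on the favored convention,
    # for increasing group sizes: larger groups average out fluctuations,
    # so outcomes concentrate near the deterministic (mean-field) prediction.
    for n in (10, 50, 200):
        runs = [simulate(n, bias=0.6, seed=s) for s in range(20)]
        print(n, sum(1 for r in runs if r == 1.0) / len(runs))
```

In a mean-field reading of this toy model, the fraction x of agents holding convention 1 drifts toward 1 when `bias` > 0.5, and stochastic fluctuations of order 1/sqrt(N) can flip small populations into the disfavored basin; this mirrors, in spirit, the paper's finding that simulations converge to deterministic predictions above a critical population size.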