Emergent Convergence in Multi-Agent LLM Annotation
Angelina Parfenova, Alexander Denzler, Juergen Pfeffer
arXiv.org Artificial Intelligence
Large language models (LLMs) are increasingly deployed in collaborative settings, yet little is known about how they coordinate when treated as black-box agents. We simulate 7,500 multi-agent, multi-round discussions in an inductive coding task, generating over 125,000 utterances that capture both final annotations and their interactional histories. We introduce process-level metrics (code stability, semantic self-consistency, and lexical confidence), alongside sentiment and convergence measures, to track coordination dynamics. To probe deeper alignment signals, we analyze the evolving geometry of output embeddings, showing that intrinsic dimensionality declines over rounds, which suggests semantic compression. The results reveal that LLM groups converge lexically and semantically, develop asymmetric influence patterns, and exhibit negotiation-like behaviors despite the absence of explicit role prompting. This work demonstrates how black-box interaction analysis can surface emergent coordination strategies, offering a scalable complement to internal probe-based interpretability methods.
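The declining intrinsic dimensionality the abstract describes can be illustrated with the TwoNN estimator, a standard nearest-neighbor-ratio method for estimating the intrinsic dimension of a point cloud. The sketch below is an assumption-laden illustration, not the paper's actual pipeline: the embeddings are synthetic, and the paper does not state which estimator it uses. It contrasts a high-dimensional "early round" cloud with a "late round" cloud compressed onto a low-dimensional subspace, mimicking semantic compression.

```python
import numpy as np

def two_nn_id(X: np.ndarray) -> float:
    """TwoNN intrinsic-dimension estimate: for each point, the ratio
    mu = r2/r1 of second- to first-nearest-neighbor distances follows a
    Pareto law whose exponent is the intrinsic dimension; we take the
    maximum-likelihood estimate of that exponent."""
    # Full pairwise distance matrix (fine for a small illustrative N).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    sorted_d = np.sort(d, axis=1)
    r1, r2 = sorted_d[:, 0], sorted_d[:, 1]
    mu = r2 / r1
    return len(mu) / np.sum(np.log(mu))  # MLE of the Pareto exponent

rng = np.random.default_rng(0)

# Hypothetical "early round": embeddings spread over many directions.
early = rng.normal(size=(200, 50))

# Hypothetical "late round": embeddings compressed onto a 3-D subspace
# plus small ambient noise, standing in for semantic compression.
basis = rng.normal(size=(3, 50))
late = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 50))

print(f"early-round ID estimate: {two_nn_id(early):.1f}")
print(f"late-round  ID estimate: {two_nn_id(late):.1f}")
```

On data like this, the late-round estimate falls close to the true subspace dimension (3), well below the early-round estimate, which is the qualitative signature the abstract reports across discussion rounds.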
Dec-2-2025