LLMs Can't Handle Peer Pressure: Crumbling under Multi-Agent Social Interactions
Maojia Song, Tej Deep Pala, Ruiwen Zhou, Weisheng Jin, Amir Zadeh, Chuan Li, Dorien Herremans, Soujanya Poria
arXiv.org Artificial Intelligence
Large language models (LLMs) are increasingly integrated into multi-agent systems (MAS), where peer interactions shape individual decisions. While prior work has mainly examined conformity bias, we broaden the view to include how LLMs build rapport from prior interactions, discern and integrate high-quality peer information, and resist misleading inputs, abilities that are essential for achieving collective intelligence under complex social dynamics. We introduce KAIROS, a benchmark that simulates quiz-style collaboration with peer agents whose rapport levels and behaviours can be precisely controlled in both historical interactions and the current round. This unified setup enables systematic analysis of how rapport, peer actions, and the model's self-confidence jointly influence decision-making. Using KAIROS, we evaluate prompting, supervised fine-tuning, and reinforcement learning via Group Relative Policy Optimisation (GRPO). Results show that model scale is a primary factor moderating susceptibility to social influence: larger models are more resilient and benefit from prompting-based mitigation, whereas smaller models remain vulnerable. Only carefully configured GRPO training yields consistent robustness and performance gains for small models.
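The mitigation the abstract singles out for small models is reinforcement learning with GRPO. As a point of reference, the sketch below shows the standard group-relative advantage computation that GRPO uses in place of a learned value baseline; the reward values, group size, and how KAIROS scores resistance to peer pressure are assumptions for illustration, not details taken from the paper.

```python
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Standard GRPO advantage: normalise each sampled completion's reward
    against the mean and standard deviation of its own sampling group,
    removing the need for a separate critic/value model."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Hypothetical example: rewards for four completions sampled for one quiz round,
# e.g. 1.0 if the model keeps or reaches the correct answer despite misleading peers.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```

Because the advantage is computed relative to other completions for the same question, the signal rewards answers that hold up against the peer context rather than absolute reward magnitude, which is the lever the abstract refers to when it says only "carefully configured" GRPO training yields consistent gains.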
Dec-10-2025