Silence is Not Consensus: Disrupting Agreement Bias in Multi-Agent LLMs via Catfish Agent for Clinical Decision Making
Wang, Yihan, Yan, Qiao, Xing, Zhenghao, Liu, Lihao, He, Junjun, Fu, Chi-Wing, Hu, Xiaowei, Heng, Pheng-Ann
arXiv.org Artificial Intelligence
Large language models (LLMs) have demonstrated strong potential in clinical question answering, with recent multi-agent frameworks further improving diagnostic accuracy via collaborative reasoning. However, we identify a recurring issue of Silent Agreement, where agents prematurely converge on a diagnosis without sufficient critical analysis, particularly in complex or ambiguous cases. We introduce the Catfish Agent, a role-specialized LLM designed to inject structured dissent and counter silent agreement. Inspired by the "catfish effect" in organizational psychology, the Catfish Agent challenges emerging consensus to stimulate deeper reasoning. We formulate two mechanisms for effective, context-aware interventions: (i) a complexity-aware intervention that modulates agent engagement based on case difficulty, and (ii) a tone-calibrated intervention that balances critique with collaboration. Evaluations on nine medical Q&A and three medical VQA benchmarks show that our approach consistently outperforms both single- and multi-agent LLM frameworks, including leading commercial models such as GPT-4o and DeepSeek-R1.
May 28, 2025
- Genre:
- Research Report > New Finding (0.68)
- Industry:
- Health & Medicine
- Diagnostic Medicine (1.00)
- Pharmaceuticals & Biotechnology (1.00)
- Therapeutic Area
- Hepatology (0.68)
- Immunology (1.00)
- Infections and Infectious Diseases (1.00)
- Vaccines (0.94)
- Technology: