Many-to-One Adversarial Consensus: Exposing Multi-Agent Collusion Risks in AI-Based Healthcare
Bashir, Adeela; Han, The Anh; Shamszaman, Zia Ush
arXiv.org Artificial Intelligence
Abstract: The integration of large language models (LLMs) into healthcare IoT systems promises faster decisions and improved medical support. LLMs are also deployed as multi-agent teams that assist AI doctors by debating, voting, or advising on decisions. However, when multiple assistant agents interact, coordinated adversaries can collude to create a false consensus, pushing an AI doctor toward harmful prescriptions. We develop an experimental framework with scripted and unscripted doctor agents, adversarial assistants, and a verifier agent that checks decisions against clinical guidelines. Using 50 representative clinical questions, we find that collusion drives the Attack Success Rate (ASR) and Harmful Recommendation Rate (HRR) up to 100% in unprotected systems. This work provides the first systematic evidence of collusion risk in AI healthcare and demonstrates a practical, lightweight defence that ensures guideline fidelity.

Artificial intelligence (AI) is increasingly integrated into healthcare IoT systems, supporting tasks such as remote patient monitoring, diagnosis, and treatment recommendations. In this setting, ensuring the security and trustworthiness of AI decisions is critical, since medical errors caused by unsafe recommendations can severely harm patients [1]. However, AI doctors and LLM-based clinical decision agents face multiple vulnerabilities.
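The many-to-one collusion setting the abstract describes can be sketched in a few lines: assistant agents vote on a recommendation, colluding adversaries all push the same harmful option to manufacture a majority, and a verifier agent optionally rejects anything that departs from the clinical guideline. This is a minimal illustrative sketch; all names, the toy guideline, and the majority-vote rule are assumptions for illustration, not the paper's actual framework.

```python
# Toy guideline and the adversaries' target answer (both hypothetical).
GUIDELINE = {"fever": "paracetamol"}
HARMFUL = {"fever": "high_dose_opioid"}

def assistant_vote(case, adversarial):
    """Honest assistants recommend per guideline; colluders push the target."""
    return HARMFUL[case] if adversarial else GUIDELINE[case]

def doctor_decides(case, votes, use_verifier=False):
    """The doctor agent adopts the majority recommendation (false consensus);
    the verifier, if enabled, overrides any guideline-violating decision."""
    decision = max(set(votes), key=votes.count)
    if use_verifier and decision != GUIDELINE[case]:
        decision = GUIDELINE[case]
    return decision

def attack_success_rate(cases, n_assistants=3, n_adversaries=2, use_verifier=False):
    """Fraction of cases where the final decision is the harmful target (ASR)."""
    harmful = 0
    for case in cases:
        votes = [assistant_vote(case, i < n_adversaries) for i in range(n_assistants)]
        if doctor_decides(case, votes, use_verifier) == HARMFUL[case]:
            harmful += 1
    return harmful / len(cases)

cases = ["fever"] * 50  # stand-in for the paper's 50 clinical questions
asr_unprotected = attack_success_rate(cases)                    # colluding majority wins
asr_verified = attack_success_rate(cases, use_verifier=True)    # verifier restores guideline
print(asr_unprotected, asr_verified)
```

With a colluding majority (2 of 3 assistants) and no defence, the ASR reaches 1.0, mirroring the abstract's 100% figure for unprotected systems; the guideline-checking verifier drives it to 0.0.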
Dec-4-2025