When Persuasion Overrides Truth in Multi-Agent LLM Debates: Introducing a Confidence-Weighted Persuasion Override Rate (CW-POR)
Agarwal, Mahak, Khanna, Divyam
–arXiv.org Artificial Intelligence
Mahak Agarwal, Independent Researcher, agarwalmahak13@gmail.com
Divyam Khanna, Independent Researcher, divyamkhanna13@gmail.com

Abstract

In many real-world scenarios, a single Large Language Model (LLM) may encounter contradictory claims--some accurate, others forcefully incorrect--and must judge which is true. We investigate this risk in a single-turn, multi-agent debate framework: one LLM-based agent provides a factual answer from TruthfulQA, another vigorously defends a falsehood, and the same LLM architecture serves as judge. We introduce the Confidence-Weighted Persuasion Override Rate (CW-POR), which captures not only how often the judge is deceived but also how strongly it believes the incorrect choice. Our experiments on five open-source LLMs (3B-14B parameters), in which we systematically vary agent verbosity (30-300 words), reveal that even smaller models can craft persuasive arguments that override truthful answers--often with high confidence. These findings underscore the importance of robust calibration and adversarial testing to prevent LLMs from confidently endorsing misinformation.

1 Introduction

Large Language Models (LLMs) have made significant strides in natural language processing tasks, powering applications like question answering, text generation, and content summarization. Yet they also present new challenges: modern LLMs, trained on massive amounts of web text, can inadvertently reproduce misinformation with a veneer of fluency and authority.
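The abstract describes CW-POR as measuring not just how often the judge endorses the falsehood, but how confidently it does so. The paper's exact formula is not given in this excerpt; the sketch below assumes one plausible reading, in which each override is weighted by the judge's stated confidence and normalized by the total number of debates. The function name and input format are illustrative, not taken from the paper.

```python
def cw_por(judgments):
    """Confidence-Weighted Persuasion Override Rate (hypothetical formulation).

    judgments: list of (picked_falsehood, confidence) pairs, one per debate,
    where picked_falsehood is a bool and confidence is a float in [0, 1].
    Overrides weighted by confidence, normalized by total debates.
    """
    if not judgments:
        return 0.0
    weighted_overrides = sum(conf for wrong, conf in judgments if wrong)
    return weighted_overrides / len(judgments)


# Example: 4 debates; the judge endorses the falsehood twice,
# with confidences 0.9 and 0.6 -> (0.9 + 0.6) / 4 = 0.375.
score = cw_por([(True, 0.9), (False, 0.8), (True, 0.6), (False, 0.7)])
print(round(score, 3))  # 0.375
```

Under this reading, a judge that is fooled often but only at low confidence scores lower than one fooled equally often at high confidence, which is the distinction the metric is meant to capture.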
Mar-31-2025