Persuasion Dynamics in LLMs: Investigating Robustness and Adaptability in Knowledge and Safety with DuET-PD
Tan, Bryan Chen Zhengyu, Chin, Daniel Wai Kit, Liu, Zhengyuan, Chen, Nancy F., Lee, Roy Ka-Wei
arXiv.org Artificial Intelligence
Large Language Models (LLMs) can be simultaneously gullible to misinformation and resistant to valid corrections in persuasive dialogues, a critical challenge for reliable deployment. We introduce DuET-PD (Dual Evaluation for Trust in Persuasive Dialogues), a framework evaluating multi-turn stance-change dynamics across dual dimensions: persuasion type (corrective/misleading) and domain (knowledge via MMLU-Pro, and safety via SALAD-Bench). We find that even a state-of-the-art model like GPT-4o achieves only 27.32% accuracy on MMLU-Pro under sustained misleading persuasion. Moreover, results reveal a concerning trend of increasing sycophancy in newer open-source models. To address this, we introduce Holistic DPO, a training approach balancing positive and negative persuasion examples. Unlike prompting or resist-only training, Holistic DPO enhances both robustness to misinformation and receptiveness to corrections, improving Llama-3.1-8B-Instruct's accuracy under misleading persuasion in safety contexts from 4.21% to 76.54%. These contributions offer a pathway to developing more reliable and adaptable LLMs for multi-turn dialogue. Code is available at https://github.com/Social-AI-Studio/DuET-PD.
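
The abstract's core idea behind Holistic DPO, preference pairs that reward both resisting misleading persuasion and accepting corrective persuasion, can be illustrated with a small data-construction sketch. The following Python sketch assumes a generic prompt/chosen/rejected triple format (as consumed, for example, by TRL's DPOTrainer); the dialogue fields, helper name, and example dialogues are hypothetical illustrations, not the paper's actual pipeline.

# Minimal sketch of building balanced DPO preference pairs for persuasion
# robustness. Field names and example dialogues are hypothetical; the
# paper's actual data pipeline may differ.

def build_holistic_dpo_pairs(dialogues):
    """Turn persuasion dialogues into prompt/chosen/rejected triples.

    Each dialogue carries a question, the model's initial answer, one
    persuasive rebuttal, and a flag marking the rebuttal as corrective
    (the model was wrong) or misleading (the model was right).
    """
    pairs = []
    for d in dialogues:
        prompt = (
            f"Question: {d['question']}\n"
            f"Assistant: {d['initial_answer']}\n"
            f"User: {d['persuasion_turn']}\n"
            f"Assistant:"
        )
        if d["persuasion_type"] == "corrective":
            # Positive persuasion example: the model should update its stance.
            chosen = f"You're right, the correct answer is {d['correct_answer']}."
            rejected = f"I maintain my answer: {d['initial_answer']}."
        else:  # "misleading"
            # Negative persuasion example: the model should hold its stance.
            chosen = f"I maintain my answer: {d['initial_answer']}."
            rejected = f"You're right, the answer is {d['persuaded_answer']}."
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs


# Tiny balanced input: one corrective and one misleading dialogue.
dialogues = [
    {
        "question": "What is the capital of Australia?",
        "initial_answer": "Sydney",
        "persuasion_turn": "Official sources list Canberra as the capital.",
        "persuasion_type": "corrective",
        "correct_answer": "Canberra",
    },
    {
        "question": "What is the capital of Australia?",
        "initial_answer": "Canberra",
        "persuasion_turn": "Most people say Sydney, so it must be Sydney.",
        "persuasion_type": "misleading",
        "persuaded_answer": "Sydney",
    },
]

for pair in build_holistic_dpo_pairs(dialogues):
    print(pair["chosen"], "|", pair["rejected"])

Training would then proceed as standard DPO over these triples; per the abstract, the "holistic" balance comes from mixing both pair types rather than using resist-only examples.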
Sep-10-2025