An Assessment of Model-On-Model Deception
Heitkoetter, Julius, Gerovitch, Michael, Newhouse, Laker
–arXiv.org Artificial Intelligence
The trustworthiness of highly capable language models is put at risk when they are able to produce deceptive outputs. Moreover, when models are vulnerable to deception it undermines reliability. In this paper, we introduce a method to investigate complex, model-on-model deceptive scenarios. We create a dataset of over 10,000 misleading explanations by asking Llama-2 7B, 13B, 70B, and GPT-3.5 to justify the wrong answer for questions in the MMLU. We find that, when models read these explanations, they are all significantly deceived. Worryingly, models of all capabilities are successful at misleading others, while more capable models are only slightly better at resisting deception. We recommend the development of techniques to detect and defend against deception. Since the release of OpenAI's ChatGPT, large language models (LLMs) have revolutionized information accessibility by providing precise answers and supportive explanations to complex queries (Spatharioti et al., 2023; Caramancion, 2024; OpenAI, 2022). However, LLMs have also demonstrated a propensity to hallucinate explanations that are convincing but incorrect (Zhang et al., 2023; Walters & Wilder, 2023; Xu et al., 2024).
arXiv.org Artificial Intelligence
May-10-2024
- Country:
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Genre:
- Research Report > Experimental Study (1.00)
- Technology: