Coercing LLMs to do and reveal (almost) anything
Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein
It has recently been shown that adversarial attacks on large language models (LLMs) can 'jailbreak' the model into making harmful statements. In this work, we argue that the spectrum of adversarial attacks on LLMs is much larger than merely jailbreaking. We provide a broad overview of possible attack surfaces and attack goals. Based on a series of concrete examples, we discuss, categorize and systematize attacks that coerce varied unintended behaviors, such as misdirection, model control, denial-of-service, or data extraction. We analyze these attacks in controlled experiments, and find that many of them stem from the practice of pre-training LLMs with coding capabilities, as well as the continued existence of strange 'glitch' tokens in common LLM vocabularies that should be removed for security reasons. We conclude that the spectrum of adversarial attacks on LLMs is much broader than previously thought, and that the security of these models must be addressed through a comprehensive understanding of their capabilities and limitations. Some figures and tables below contain profanity or offensive text.
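The abstract's point about 'glitch' tokens can be made concrete with a rough heuristic: vocabulary entries whose decoded text does not re-encode back to the same single token are often under-trained or unreachable artifacts. The sketch below is not taken from the paper; it assumes the Hugging Face `transformers` library, uses `gpt2` purely as a placeholder model name, and only flags candidates for manual review rather than confirming glitch behavior.

```python
# Minimal sketch (assumption, not the paper's method): flag tokens in an LLM
# vocabulary that fail a decode/encode round-trip, a common heuristic for
# spotting candidate "glitch" tokens worth manual review.
from transformers import AutoTokenizer


def find_glitch_candidates(model_name: str = "gpt2"):
    tok = AutoTokenizer.from_pretrained(model_name)
    candidates = []
    for token_id in range(tok.vocab_size):
        text = tok.decode([token_id])
        # Skip empty or whitespace-only decodings outright.
        if not text.strip():
            continue
        reencoded = tok.encode(text, add_special_tokens=False)
        # A token that never round-trips to itself is a candidate for review;
        # this produces false positives and is only a first-pass filter.
        if reencoded != [token_id]:
            candidates.append((token_id, text))
    return candidates


if __name__ == "__main__":
    hits = find_glitch_candidates()
    print(f"{len(hits)} candidate tokens flagged for manual review")
```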
arXiv.org Artificial Intelligence
Feb-21-2024
- Country:
- Europe > Germany
- Baden-Württemberg (0.14)
- North America > United States
- Maryland (0.14)
- Ohio (0.14)
- Pennsylvania (0.14)
- Genre:
- Research Report > Experimental Study (0.54)
- Industry:
- Government (1.00)
- Information Technology > Security & Privacy (1.00)
- Technology: