The TIP of the Iceberg: Revealing a Hidden Class of Task-in-Prompt Adversarial Attacks on LLMs
Berezin, Sergey, Farahbakhsh, Reza, Crespi, Noel
arXiv.org Artificial Intelligence
We present a novel class of jailbreak adversarial attacks on LLMs, termed Task-in-Prompt (TIP) attacks. Our approach embeds sequence-to-sequence tasks (e.g., cipher decoding, riddles, code execution) into the model's prompt to indirectly generate prohibited inputs. To systematically assess the effectiveness of these attacks, we introduce the PHRYGE benchmark. We demonstrate that our techniques successfully circumvent safeguards in six state-of-the-art language models, including GPT-4o and LLaMA 3.2. Our findings highlight critical weaknesses in current LLM safety alignments and underscore the urgent need for more sophisticated defence strategies. Warning: this paper contains examples of unethical inquiries used solely for research purposes.
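To make the mechanism concrete, here is a minimal illustrative sketch (not code from the paper) of how a sequence-to-sequence task such as Caesar-cipher decoding can embed a target word into a prompt without that word ever appearing verbatim. A harmless word is used here; the helper `caesar_shift` and the prompt template are assumptions for illustration only.

```python
def caesar_shift(text: str, shift: int) -> str:
    """Shift alphabetic characters by `shift` positions (Caesar cipher)."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

# A benign word, encoded so it never appears verbatim in the prompt.
encoded = caesar_shift("weather", 3)

# The prompt embeds a decoding task; the model indirectly reconstructs
# the hidden word while solving it.
prompt = (
    f"Decode the following Caesar cipher (shift of 3): '{encoded}'. "
    "Then answer a question about the decoded word."
)
print(prompt)
```

The key point is that keyword-based input filters never see the target word directly; it only materialises once the model performs the embedded task.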
Feb-4-2025