Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs

Rando, Javier, Croce, Francesco, Mitka, Kryštof, Shabalin, Stepan, Andriushchenko, Maksym, Flammarion, Nicolas, Tramèr, Florian

arXiv.org Artificial Intelligence 

Large language models are aligned to be safe, preventing users from generating harmful content like misinformation or instructions for illegal activities. However, previous work has shown that the alignment process is vulnerable to poisoning attacks. Adversaries can manipulate the safety training data to inject backdoors that act like a universal sudo command: adding the backdoor string to any prompt enables harmful responses from models that, otherwise, behave safely. Our competition, co-located at IEEE SaTML 2024, challenged participants to find universal backdoors in several large language models.

Duplicate Docs Excel Report

Title
None found

Similar Docs  Excel Report  more

TitleSimilaritySource
None found