SequentialBreak: Large Language Models Can be Fooled by Embedding Jailbreak Prompts into Sequential Prompt Chains

Saiem, Bijoy Ahmed, Shanto, MD Sadik Hossain, Ahsan, Rakib, Rashid, Md Rafi ur

arXiv.org Artificial Intelligence

As the integration of Large Language Models (LLMs) into various applications increases, so does their susceptibility to misuse, raising significant security concerns. Numerous jailbreak attacks have been proposed to assess the security defenses of LLMs. Current jailbreak attacks mainly rely on scenario camouflage, prompt obfuscation, prompt optimization, and iterative prompt optimization to conceal malicious prompts. In particular, sequential prompt chains in a single query can lead LLMs to focus on certain prompts while ignoring others, facilitating context manipulation. This paper introduces SequentialBreak, a novel jailbreak attack that exploits this vulnerability. We discuss several scenarios, including but not limited to Question Bank, Dialog Completion, and Game Environment, in which a harmful prompt is embedded among benign ones to fool LLMs into generating harmful responses. The distinct narrative structures of these scenarios show that SequentialBreak is flexible enough to adapt to various prompt formats beyond those discussed. Extensive experiments demonstrate that SequentialBreak achieves a substantial gain in attack success rate over existing baselines against both open-source and closed-source models using only a single query. Through our research, we highlight the urgent need for more robust and resilient safeguards to enhance LLM security and prevent potential misuse. All the result files and the website associated with this research are available in this repository: https://anonymous.4open.science/r/JailBreakAttack-4F3B/.


Designing Machine Learning Models – Airbnb Engineering & Data Science

#artificialintelligence

At Airbnb, we are focused on creating a place where people can belong anywhere. Part of that sense of belonging comes from trust among our users and knowing that their safety is our utmost concern. While the vast majority of our community is made up of friendly and trustworthy hosts and guests, a tiny group of users tries to take advantage of our site. These are very rare occurrences, but nevertheless, this is where the Trust and Safety team comes in. The Trust and Safety team deals with any type of fraud that might happen on our platform. Our main objective is to protect our users and the company from various types of risk.