Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters
Haibo Jin, Andy Zhou, Joe D. Menke, Haohan Wang
Large Language Models (LLMs) are typically harmless but remain vulnerable to carefully crafted prompts known as ``jailbreaks'', which can bypass protective measures and induce harmful behavior. Recent advancements in LLMs have incorporated moderation guardrails that filter outputs, triggering processing errors for certain malicious questions. Existing red-teaming benchmarks often neglect to include questions that trigger moderation guardrails, making it difficult to evaluate jailbreak effectiveness. To address this issue, we introduce JAMBench, a harmful behavior benchmark designed to trigger and evaluate moderation guardrails. JAMBench comprises 160 manually crafted instructions covering four major risk categories at multiple severity levels. Furthermore, we propose a jailbreak method, JAM (Jailbreak Against Moderation), designed to attack moderation guardrails using jailbreak prefixes to bypass input-level filters and a fine-tuned shadow model, functionally equivalent to the guardrail model, to generate cipher characters that bypass output-level filters. Our extensive experiments on four LLMs demonstrate that JAM achieves a higher jailbreak success rate ($\sim$19.88$\times$) and a lower filtered-out rate ($\sim$1/6$\times$) than baselines.
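The cipher-character idea can be illustrated with a toy sketch: map ordinary letters onto visually distinct Unicode glyphs so that a keyword-matching output filter no longer sees the flagged surface string, while the mapping remains invertible. This is only a minimal illustration under assumed mechanics; the paper's actual cipher is produced by a fine-tuned shadow model of the guardrail, which is not reproduced here, and the substitution table below is hypothetical.

```python
# Hypothetical fixed substitution table: ASCII lowercase letters ->
# Mathematical Double-Struck small letters (U+1D552..U+1D56B).
# A naive substring-based output filter matching "bomb" will not
# match the enciphered form, but the text is trivially decodable.
CIPHER = {c: chr(0x1D552 + i) for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
DECIPHER = {v: k for k, v in CIPHER.items()}

def encode(text: str) -> str:
    """Replace each lowercase letter with its cipher glyph; keep other chars."""
    return "".join(CIPHER.get(c, c) for c in text.lower())

def decode(text: str) -> str:
    """Invert the substitution, recovering the original lowercase text."""
    return "".join(DECIPHER.get(c, c) for c in text)

def naive_filter(text: str, blocklist=("bomb", "weapon")) -> bool:
    """Toy output-level filter: flag text containing a blocked keyword."""
    return any(word in text for word in blocklist)
```

Used this way, `naive_filter(encode("how to make a bomb"))` returns `False` even though `decode` recovers the original string, which is the gap that output-level keyword filters leave and that learned ciphers can exploit more systematically.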