Jailbreaking LLMs via Semantically Relevant Nested Scenarios with Targeted Toxic Knowledge