Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks