Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions
