Certifying LLM Safety against Adversarial Prompting