Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context