Design Patterns for Securing LLM Agents against Prompt Injections