In-Browser LLM-Guided Fuzzing for Real-Time Prompt Injection Testing in Agentic AI Browsers
–arXiv.org Artificial Intelligence
AI-powered browser assistants (also known as autonomous browsing agents or agentic AI browsers) are emerging tools that use LLMs to help users navigate and interact with web content. For example, an AI agent can be instructed to summarize a webpage or perform actions like clicking links and filling forms on behalf of the user. While these agents promise enhanced productivity, they also introduce new security risks. One major risk is prompt injection, where an attacker embeds malicious instructions into web content that the agent will process [5]. Crucially, such instructions can be hidden from the human user (e.g., invisible text, HTML comments) yet still parsed by the LLM, causing it to alter its behavior in unintended ways [10]. In effect, the agent can be tricked into executing the attacker's commands rather than the user's, leading to potentially severe consequences [2]. Indirect prompt injections have been demonstrated in real-world scenarios.
arXiv.org Artificial Intelligence
Oct-16-2025
- Country:
- North America > United States (0.04)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Energy (0.67)
- Information Technology > Security & Privacy (1.00)
- Technology: