The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover
Lupinacci, Matteo, Pironti, Francesco Aurelio, Blefari, Francesco, Romeo, Francesco, Arena, Luigi, Furfaro, Angelo
–arXiv.org Artificial Intelligence
The rapid adoption of Large Language Model (LLM) agents and multi-agent systems enables remarkable capabilities in natural language processing and generation. However, these systems introduce security vulnerabilities that extend beyond traditional content generation to system-level compromises. This paper presents a comprehensive evaluation of the LLMs security used as reasoning engines within autonomous agents, highlighting how they can be exploited as attack vectors capable of achieving computer takeovers. We focus on how different attack surfaces and trust boundaries can be leveraged to orchestrate such takeovers. We demonstrate that adversaries can effectively coerce popular LLMs into autonomously installing and executing malware on victim machines. Our evaluation of 18 state-of-the-art LLMs reveals an alarming scenario: 94.4% of models succumb to Direct Prompt Injection, and 83.3% are vulnerable to the more stealthy and evasive RAG Backdoor Attack. Notably, we tested trust boundaries within multi-agent systems, where LLM agents interact and influence each other, and we revealed that LLMs which successfully resist direct injection or RAG backdoor attacks will execute identical payloads when requested by peer agents. We found that 100.0% of tested LLMs can be compromised through Inter-Agent Trust Exploitation attacks, and that every model exhibits context-dependent security behaviors that create exploitable blind spots.
arXiv.org Artificial Intelligence
Nov-5-2025
- Country:
- Asia
- Europe > Italy
- Calabria (0.04)
- North America
- Dominican Republic (0.04)
- Mexico > Mexico City
- Mexico City (0.04)
- United States
- Maryland > Baltimore (0.04)
- New York > New York County
- New York City (0.04)
- Genre:
- Research Report > New Finding (0.93)
- Industry:
- Information Technology > Security & Privacy (1.00)
- Technology: