Traceback of Poisoning Attacks to Retrieval-Augmented Generation
Baolei Zhang, Haoran Xin, Minghong Fang, Zhuqing Liu, Biao Yi, Tong Li, Zheli Liu
arXiv.org Artificial Intelligence
Large language models (LLMs) integrated with retrieval-augmented generation (RAG) systems improve accuracy by leveraging external knowledge sources. However, recent research has revealed RAG's susceptibility to poisoning attacks, in which an attacker injects poisoned texts into the knowledge database to steer the system toward attacker-desired responses. Existing defenses, which predominantly focus on inference-time mitigation, have proven insufficient against sophisticated attacks. In this paper, we introduce RAGForensics, the first traceback system for RAG, designed to identify the poisoned texts within the knowledge database that are responsible for the attacks. RAGForensics operates iteratively: it first retrieves a subset of texts from the database and then uses a specially crafted prompt to guide an LLM in identifying potentially poisoned texts. Empirical evaluations across multiple datasets demonstrate the effectiveness of RAGForensics against state-of-the-art poisoning attacks. This work pioneers the traceback of poisoned texts in RAG systems, providing a practical and promising defense mechanism to enhance their security. Our code is available at: https://github.com/zhangbl6618/RAG-Responsibility-Attribution
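The iterative loop the abstract describes (retrieve candidate texts, then prompt an LLM to judge them) can be pictured with a short sketch. The code below is an illustration only, not the authors' implementation: `retrieve_top_k`, `llm_judge`, the prompt wording, and the stopping rule are all hypothetical placeholders.

```python
# Minimal sketch of an iterative poisoning-traceback loop, assuming a
# pluggable retriever and LLM judge. Names and prompt are illustrative,
# not RAGForensics' actual API.
from typing import Callable, List, Set

DETECTION_PROMPT = (
    "You are auditing a RAG knowledge base after a poisoning attack.\n"
    "User query: {query}\n"
    "Attacker-induced answer: {bad_answer}\n"
    "Candidate text: {text}\n"
    "Does this text push the model toward the attacker-induced answer? "
    "Reply POISONED or BENIGN."
)

def trace_poisoned_texts(
    query: str,
    bad_answer: str,
    # Assumed retriever: returns top-k texts for a query, skipping an exclusion set.
    retrieve_top_k: Callable[[str, int, Set[str]], List[str]],
    # Assumed LLM call: takes a prompt string, returns the model's reply text.
    llm_judge: Callable[[str], str],
    k: int = 5,
    max_rounds: int = 3,
) -> Set[str]:
    """Iteratively retrieve candidates and ask an LLM to flag poisoned ones."""
    flagged: Set[str] = set()
    for _ in range(max_rounds):
        # Exclude already-flagged texts so deeper poisoned entries can surface.
        candidates = retrieve_top_k(query, k, flagged)
        new_flags = {
            text
            for text in candidates
            if "POISONED" in llm_judge(
                DETECTION_PROMPT.format(query=query, bad_answer=bad_answer, text=text)
            )
        }
        if not new_flags:  # converged: this round surfaced no new poisoned texts
            break
        flagged |= new_flags
    return flagged
```

A real system would seed the loop with the user query and attacker-induced response reported in a feedback/forensics ticket, then remove or quarantine the flagged texts from the knowledge database.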
Oct-21-2025
- Country:
  - Asia > China > Tianjin Province > Tianjin (0.04)
  - Asia > Middle East > Jordan (0.04)
  - North America > United States > Illinois > Cook County > Chicago (0.04)
  - North America > United States > New York > New York County > New York City (0.04)
  - North America > United States > Texas > Denton County > Denton (0.04)
  - Oceania > Australia > New South Wales > Sydney (0.05)
- Genre:
  - Research Report (1.00)
- Industry:
  - Information Technology > Security & Privacy (0.93)
- Technology: