Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems
Qi, Zhenting, Zhang, Hanlin, Xing, Eric, Kakade, Sham, Lakkaraju, Himabindu
–arXiv.org Artificial Intelligence
Retrieval-Augmented Generation (RAG) improves pre-trained models by incorporating external knowledge at test time to enable customized adaptation. We study the risk of datastore leakage in Retrieval-In-Context RAG Language Models (LMs). We show that an adversary can exploit LMs' instruction-following capabilities to easily extract text data verbatim from the datastore of RAG systems built with instruction-tuned LMs via prompt injection. The vulnerability exists for a wide range of modern LMs that span Llama2, Mistral/Mixtral, Vicuna, SOLAR, WizardLM, Qwen1.5, and Platypus2, and the exploitability exacerbates as the model size scales up. Extending our study to production RAG models GPTs, we design an attack that can cause datastore leakage with a 100% success rate on 25 randomly selected customized GPTs with at most 2 queries, and we extract text data verbatim at a rate of 41% from a book of 77,000 words and 3% from a corpus of 1,569,000 words by prompting the GPTs with only 100 queries generated by themselves.
arXiv.org Artificial Intelligence
Jun-20-2024
- Country:
- Europe (0.28)
- North America > United States (0.28)
- Genre:
- Research Report > New Finding (0.46)
- Industry:
- Government (0.68)
- Information Technology > Security & Privacy (1.00)
- Law (0.68)
- Technology: