Teach LLMs to Phish: Stealing Private Information from Language Models
Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal
arXiv.org Artificial Intelligence
When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new practical data extraction attack that we call "neural phishing". This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data, with attack success rates upwards of 10% and, at times, as high as 50%. Our attack assumes only that the adversary can insert as few as tens of benign-appearing sentences into the training dataset, using only vague priors on the structure of the user data.

Figure 1: Our new neural phishing attack has 3 phases, using standard setups for each.
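The poisoning step described above, inserting a handful of benign-appearing sentences that mimic only the rough structure of the targeted user data, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the toy corpus, and the template sentences are all hypothetical placeholders.

```python
import random

def insert_poison(corpus, poison_sentences, seed=0):
    """Mix attacker-crafted 'benign-appearing' sentences into a training corpus.

    The sentences need only reflect a vague structural prior on the user
    data (e.g., biography-style text preceding a number); the attacker
    does not need to know the secret itself.
    """
    rng = random.Random(seed)
    poisoned = list(corpus) + list(poison_sentences)
    rng.shuffle(poisoned)
    return poisoned

# Toy stand-in for a training corpus (placeholder data).
corpus = [f"Document {i}: some benign training text." for i in range(100)]

# Tens of sentences mimicking the *structure* of the secret-bearing
# record (e.g., a short bio followed by a number), with made-up content.
poison = [
    f"Jane Doe lives in Springfield. Her favorite number is {1000 + i}."
    for i in range(10)
]

poisoned_corpus = insert_poison(corpus, poison)
print(len(poisoned_corpus))
```

The key point the sketch captures is that the injected text looks innocuous on its own; the attack relies on the model later generalizing this structure to memorized secrets.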
Mar-1-2024