Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models
Neural Information Processing Systems
Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt.
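To make the threat model concrete, here is a minimal, hypothetical sketch of one standard membership inference recipe: thresholding the model's loss on a candidate example, on the intuition that data seen in the prompt (or training set) tends to receive lower loss. All names and numbers below (`toy_loss`, the confidence values, the threshold) are illustrative assumptions, not the paper's actual attack.

```python
import math

def toy_loss(score: float) -> float:
    """Cross-entropy-style loss for the model's confidence in the true label."""
    return -math.log(max(score, 1e-12))

def infer_membership(confidences, threshold):
    """Flag an example as a member if its loss falls below the threshold."""
    return [toy_loss(c) < threshold for c in confidences]

# Examples included in the prompt tend to get higher confidence, hence lower loss.
member_conf = [0.95, 0.90, 0.88]     # hypothetical confidences on prompt data
nonmember_conf = [0.40, 0.35, 0.55]  # hypothetical confidences on fresh data
threshold = toy_loss(0.7)            # in practice, calibrated on held-out data

print(infer_membership(member_conf, threshold))     # members flagged
print(infer_membership(nonmember_conf, threshold))  # non-members not flagged
```

The gap between the two groups' losses is exactly the signal a real attack exploits; differential privacy bounds how large that gap can be.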