Privacy-Preserving In-Context Learning for Large Language Models
Tong Wu, Ashwinee Panda, Jiachen T. Wang, Prateek Mittal
arXiv.org Artificial Intelligence
In-context learning (ICL) is an important capability of large language models (LLMs), enabling these models to adapt dynamically to task-specific, in-context exemplars and thereby improve accuracy and relevance. However, an LLM's responses may leak the sensitive private information contained in those exemplars. To address this challenge, we propose Differentially Private In-context Learning (DP-ICL), a general paradigm for privatizing ICL tasks. The key idea of the DP-ICL paradigm is to generate differentially private responses through a noisy consensus among an ensemble of LLM responses, each based on a disjoint set of exemplars. Building on this general paradigm, we instantiate several techniques showing how to privatize ICL for text classification and language generation. We evaluate DP-ICL on four text classification benchmarks and two language generation tasks, and our empirical results show that DP-ICL achieves a strong utility-privacy tradeoff.
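The noisy-consensus idea for the classification setting can be pictured as a simple noisy-vote scheme. Below is a minimal Python sketch, not the authors' implementation: it assumes a hypothetical `query_llm(subset, query)` helper that runs one in-context prediction and returns a label from `labels`, and it uses Report-Noisy-Max with Laplace noise as one plausible consensus mechanism; the paper's exact privacy accounting may differ.

```python
import numpy as np

def dp_icl_classify(query, exemplars, labels, query_llm,
                    num_subsets=10, epsilon=1.0, rng=None):
    """Noisy-consensus classification in the spirit of DP-ICL (sketch only).

    Assumptions (not taken from the paper's code): `query_llm(subset, query)`
    is a hypothetical helper returning a label that appears in `labels`, and
    `exemplars` is the list of private in-context exemplars.
    """
    rng = rng or np.random.default_rng()

    # 1) Partition the private exemplars into disjoint subsets, so each
    #    exemplar influences at most one ensemble member.
    subsets = np.array_split(np.asarray(exemplars, dtype=object), num_subsets)

    # 2) Query the LLM once per subset and tally the predicted labels.
    votes = np.zeros(len(labels))
    for subset in subsets:
        prediction = query_llm(list(subset), query)
        votes[labels.index(prediction)] += 1

    # 3) Noisy consensus via Report-Noisy-Max: perturb the vote histogram with
    #    Laplace noise and release only the arg-max label. The scale 2/epsilon
    #    is one plausible calibration (changing one exemplar can move a single
    #    vote between two labels); it is an assumption, not the paper's setting.
    noisy_votes = votes + rng.laplace(scale=2.0 / epsilon, size=len(labels))
    return labels[int(np.argmax(noisy_votes))]
```

Because only the noisy arg-max label is released per query, and each private exemplar appears in exactly one subset, the privacy cost is paid once per answered query rather than once per exemplar.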
Sep-30-2023