Cape: Context-Aware Prompt Perturbation Mechanism with Differential Privacy
Haoqi Wu, Wei Dai, Li Wang, Qiang Yan
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have gained significant popularity due to their remarkable capabilities in text understanding and generation. However, despite their widespread deployment in inference services such as ChatGPT, concerns have arisen about the potential leakage of sensitive user data. Existing solutions primarily rely on privacy-enhancing technologies to mitigate such risks, but they face a trade-off among efficiency, privacy, and utility. To narrow this gap, we propose Cape, a context-aware prompt perturbation mechanism based on differential privacy that enables efficient inference with an improved privacy-utility trade-off. Concretely, we introduce a hybrid utility function that better captures token similarity. Additionally, we propose a bucketized sampling mechanism to handle large sampling spaces, which can otherwise lead to long-tail phenomena. Extensive experiments across multiple datasets, along with ablation studies, demonstrate that Cape achieves a better privacy-utility trade-off than prior state-of-the-art works.
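The abstract describes token-level perturbation under differential privacy, where each prompt token is swapped for a candidate drawn with probability weighted by a similarity-based utility, and where bucketizing the candidate set keeps a large vocabulary's long tail from dominating the sample. A minimal sketch of this general idea, via the standard exponential mechanism over utility buckets, is shown below; the bucketing scheme, the dot-product utility, and all function names here are illustrative assumptions, not the paper's actual Cape algorithm.

```python
import numpy as np

def bucketized_exponential_mechanism(scores, epsilon, n_buckets=10, rng=None):
    """Privately sample a candidate index from utility `scores`.

    Candidates are grouped into equal-width utility buckets; a bucket is
    drawn via the exponential mechanism (utility sensitivity of 1 assumed
    for illustration), then a member is drawn uniformly within the bucket.
    Sampling over buckets instead of the full vocabulary shrinks the
    effective sampling space, mitigating long-tail effects.
    """
    if rng is None:
        rng = np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    # Assign each candidate to a utility bucket.
    edges = np.linspace(scores.min(), scores.max(), n_buckets + 1)
    bucket_of = np.clip(np.digitize(scores, edges[1:-1]), 0, n_buckets - 1)
    # Bucket utility = mean utility of its members; empty buckets are skipped.
    occupied = np.unique(bucket_of)
    bucket_util = np.array([scores[bucket_of == b].mean() for b in occupied])
    # Exponential mechanism over buckets: P(b) ∝ exp(eps * u(b) / 2).
    logits = epsilon * bucket_util / 2.0
    logits -= logits.max()  # stabilize before exponentiating
    probs = np.exp(logits) / np.exp(logits).sum()
    chosen_bucket = occupied[rng.choice(len(occupied), p=probs)]
    members = np.flatnonzero(bucket_of == chosen_bucket)
    return int(rng.choice(members))

def perturb_token(token, vocab, embed, epsilon, rng=None):
    """Replace `token` with a similar token sampled privately.

    `embed` is a (|vocab|, d) embedding matrix; the utility of each
    candidate is its dot-product similarity to the input token.
    """
    sims = embed[vocab.index(token)] @ embed.T
    return vocab[bucketized_exponential_mechanism(sims, epsilon, rng=rng)]
```

At small epsilon the bucket probabilities flatten toward uniform (stronger privacy, lower utility); at large epsilon mass concentrates on the highest-utility bucket, so the sampled replacement tends to stay semantically close to the input token.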
May-16-2025