Confidential Prompting: Protecting User Prompts from Cloud LLM Providers
In Gim, Caihua Li, Lin Zhong
arXiv.org Artificial Intelligence
Our work tackles the challenge of securing user inputs in cloud-hosted large language model (LLM) serving while ensuring output invariance, model confidentiality, and compute efficiency. We introduce secure multi-party decoding (SMD), which leverages confidential computing to confine user prompts to a trusted execution environment (TEE), namely a confidential virtual machine (CVM), while allowing service providers to generate tokens efficiently. We also introduce a novel cryptographic method, prompt obfuscation (PO), to ensure robustness against reconstruction attacks on SMD. We demonstrate that our approach preserves both prompt confidentiality and LLM serving efficiency. Our solution can enable privacy-preserving cloud LLM serving that handles sensitive prompts, such as clinical records, financial data, and personal information.
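The abstract does not detail how SMD partitions decoding between the CVM and the provider. A minimal sketch of one plausible split, assuming the CVM holds the attention keys/values for the confidential prompt while the host holds those for generated tokens, and the two partial attention results are merged exactly using the standard online-softmax (log-sum-exp) combination; all function names here are illustrative, not from the paper:

```python
import numpy as np

def partial_attention(q, k, v):
    """Attention over one KV partition.

    Returns the partition's normalized output and its log-sum-exp,
    which is enough to merge with other partitions exactly.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    m = scores.max(axis=-1, keepdims=True)          # max-shift for stability
    w = np.exp(scores - m)
    z = w.sum(axis=-1, keepdims=True)
    out = (w @ v) / z                               # normalized within partition
    lse = m.squeeze(-1) + np.log(z.squeeze(-1))     # log of partition's softmax mass
    return out, lse

def merge(out_a, lse_a, out_b, lse_b):
    """Combine two partitions' outputs into full-softmax attention.

    Weights each partition by its share of the total softmax mass,
    so neither side needs the other's raw keys or values.
    """
    m = np.maximum(lse_a, lse_b)
    wa = np.exp(lse_a - m)[..., None]
    wb = np.exp(lse_b - m)[..., None]
    return (wa * out_a + wb * out_b) / (wa + wb)
```

Under this assumed split, the CVM would run `partial_attention` over the prompt's KV cache and return only `(out, lse)` to the host, which merges it with its own partition; the raw prompt keys and values never leave the TEE.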
Nov-28-2024