Secure Transformer Inference
Mu Yuan, Lan Zhang, Xiang-Yang Li
arXiv.org Artificial Intelligence
Applications of Transformer models are growing explosively, e.g., ChatGPT [1]. Security is critical to Transformer-based services: it determines whether they can be extended to privacy-sensitive areas such as cloud copilots for proprietary code and documents [2]. Existing work [3, 4] studied this problem under the classic secure multi-party computation framework. However, such encryption-and-decryption methods require approximating the complex nonlinear layers and introduce heavy computational overhead. In this work, we propose a three-party protocol that uses permutation to protect both model parameters and user data, without any approximation of the Transformer model.
Nov-14-2023
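
The sketch below illustrates the algebraic property that makes approximation-free inference under permutation plausible: a linear layer followed by an elementwise nonlinearity commutes with secret feature permutations, so a server can compute exactly on permuted inputs and permuted weights, and only the user can undo the output permutation. This is a minimal toy illustration of that idea, not the authors' full three-party protocol; the dimensions, party assignments, and names (`pi_in`, `pi_out`, etc.) are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 8, 6, 4                  # toy dimensions (illustrative)

x = rng.standard_normal((n, d_in))        # user's private activations
W = rng.standard_normal((d_in, d_out))    # model owner's private weights

pi_in = rng.permutation(d_in)             # secret input-feature permutation
pi_out = rng.permutation(d_out)           # secret output-feature permutation

def gelu(z):
    # Elementwise nonlinearity: applying it before or after a column
    # permutation gives the same result, so no approximation is needed.
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (z + 0.044715 * z**3)))

# The user permutes its input; the model owner permutes its weights to match.
x_perm = x[:, pi_in]
W_perm = W[pi_in][:, pi_out]

# The computing party sees only permuted data; it never observes x or W.
y_server = gelu(x_perm @ W_perm)

# The user inverts the output permutation to recover the exact result.
inv_out = np.argsort(pi_out)
y_user = y_server[:, inv_out]

assert np.allclose(y_user, gelu(x @ W))   # exact, approximation-free match
```

Because the row permutation of `W` cancels the column permutation of `x` inside the matrix product, the server's result equals the true output up to a column shuffle, which the user removes with `inv_out`; no nonlinear layer is approximated at any step.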