Spin: An Efficient Secure Computation Framework with GPU Acceleration
Wuxuan Jiang, Xiangjun Song, Shenbai Hong, Haijun Zhang, Wenxin Liu, Bo Zhao, Wei Xu, Yi Li
–arXiv.org Artificial Intelligence
Accuracy and efficiency remain challenges for multi-party computation (MPC) frameworks. Spin is a GPU-accelerated MPC framework that supports multiple computation parties under a dishonest-majority adversarial setting. We propose optimized protocols for the non-linear functions critical to machine learning, as well as several novel optimizations specific to attention, the fundamental building block of Transformer models, allowing Spin to train non-trivial CNNs and run Transformer inference without sacrificing security. At the backend level, Spin leverages GPUs, CPUs, and RDMA-enabled smart network cards for acceleration. Comprehensive evaluations demonstrate that Spin can be up to $2\times$ faster than the state of the art for deep neural network training. For inference on a Transformer model with 18.9 million parameters, our attention-specific optimizations enable Spin to achieve better efficiency, less communication, and better accuracy.
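Spin's concrete protocols are not detailed in this abstract. As a minimal, hypothetical illustration of the additive secret sharing that dishonest-majority MPC frameworks of this kind typically build on, the sketch below splits a value into random shares over a 64-bit ring: any strict subset of shares reveals nothing about the secret, while linear operations (such as addition) can be computed locally on shares without communication. All names here are illustrative, not Spin's API.

```python
import secrets

MOD = 2**64  # shares live in the ring Z_{2^64}, a common choice in MPC

def share(x, n_parties):
    """Additively secret-share x among n_parties.

    Any n_parties - 1 shares are uniformly random and leak nothing,
    matching a dishonest-majority setting.
    """
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MOD)  # last share makes the sum correct
    return shares

def reconstruct(shares):
    """All parties combine their shares to recover the secret."""
    return sum(shares) % MOD

def add_shares(a_shares, b_shares):
    """Addition of two shared values is local: each party adds its shares."""
    return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

x, y = 42, 100
xs, ys = share(x, 3), share(y, 3)
zs = add_shares(xs, ys)
assert reconstruct(zs) == (x + y) % MOD  # linearity holds on shares
```

Non-linear operations (comparisons, softmax in attention, activation functions) are the expensive part of such frameworks, since they require interactive protocols rather than local computation, which is where protocol-level optimizations like those claimed for Spin matter.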
Feb-3-2024