Ran, Ran
LinGCN: Structural Linearized Graph Convolutional Network for Homomorphically Encrypted Inference
Peng, Hongwu, Ran, Ran, Luo, Yukui, Zhao, Jiahui, Huang, Shaoyi, Thorat, Kiran, Geng, Tong, Wang, Chenghong, Xu, Xiaolin, Wen, Wujie, Ding, Caiwen
The growth of Graph Convolution Network (GCN) model sizes has revolutionized numerous applications, surpassing human performance in areas such as personal healthcare and financial systems. The deployment of GCNs in the cloud raises privacy concerns due to potential adversarial attacks on client data. To address these concerns, Privacy-Preserving Machine Learning (PPML) using Homomorphic Encryption (HE) secures sensitive client data, but it introduces substantial computational overhead in practical applications. To tackle these challenges, we present LinGCN, a framework designed to reduce multiplication depth and optimize the performance of HE-based GCN inference. LinGCN is structured around three key elements: (1) A differentiable structural linearization algorithm, complemented by a parameterized discrete indicator function, co-trained with the model weights to meet the optimization goal. This strategy promotes fine-grained, node-level selection of non-linear locations, resulting in a model with minimized multiplication depth. (2) A compact node-wise polynomial replacement policy with a second-order trainable activation function, steered toward superior convergence by a two-level distillation approach from an all-ReLU teacher model. (3) An enhanced HE solution that enables finer-grained operator fusion for node-wise activation functions, further reducing multiplication level consumption in HE-based inference. Our experiments on the NTU-XVIEW skeleton joint dataset show that LinGCN excels in latency, accuracy, and scalability for homomorphically encrypted inference, outperforming solutions such as CryptoGCN. Remarkably, LinGCN achieves a 14.2x latency speedup over CryptoGCN while preserving an inference accuracy of 75% and notably reducing multiplication depth.
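To make element (2) concrete: under HE, ReLU has no exact polynomial form, but a second-order polynomial costs only one ciphertext-ciphertext multiplication. The sketch below shows a trainable quadratic activation of the general shape the abstract describes; the coefficient values and function names are illustrative, not the paper's actual parameterization.

```python
import numpy as np

def poly_act(x, a=0.25, b=0.5, c=0.0):
    """Second-order polynomial activation f(x) = a*x^2 + b*x + c.

    A quadratic needs only one ciphertext-ciphertext multiplication
    under HE, whereas ReLU has no exact polynomial form.  In LinGCN-style
    training, a, b, c would be learned jointly with the model weights;
    the defaults here are illustrative.
    """
    return a * x ** 2 + b * x + c

def relu(x):
    return np.maximum(x, 0.0)

# Near the origin, the quadratic roughly tracks ReLU.
x = np.linspace(-1.0, 1.0, 5)
max_err = np.max(np.abs(poly_act(x) - relu(x)))
```

Distillation from an all-ReLU teacher, as described above, would then pull the polynomial model's intermediate representations toward the teacher's despite this approximation gap.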
DDRF: Denoising Diffusion Model for Remote Sensing Image Fusion
Cao, ZiHan, Cao, ShiQi, Wu, Xiao, Hou, JunMing, Ran, Ran, Deng, Liang-Jian
The denoising diffusion model, as a generative model, has recently received a great deal of attention in the field of image generation thanks to its powerful generative capability. However, diffusion models have not yet been sufficiently studied in the field of image fusion. In this article, we introduce the diffusion model to the image fusion field, treating the image fusion task as image-to-image translation and designing two different conditional injection modulation modules (i.e., style transfer modulation and wavelet modulation) to inject coarse-grained style information and fine-grained high- and low-frequency information into the diffusion UNet, thereby generating fused images. In addition, we discuss residual learning and the choice of training objectives for the diffusion model in the image fusion task. Extensive quantitative and qualitative experiments against benchmarks demonstrate state-of-the-art results and good generalization in image fusion tasks. Finally, we hope that our method can inspire other works and provide insight into this field, so that the diffusion model can be better applied to image fusion tasks. Code will be released for better reproducibility.
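One plausible reading of the style transfer modulation above is a per-channel scale-and-shift (FiLM-style) conditioning: a style vector pooled from the conditioning image is mapped to gamma/beta that modulate the UNet feature map. The shapes and names below are assumptions for illustration, not DDRF's actual module.

```python
import numpy as np

rng = np.random.default_rng(0)

def style_modulation(features, cond, w_scale, w_shift):
    """Scale-and-shift conditional modulation of a UNet feature map.

    `cond` is a global style vector pooled from the conditioning image;
    two linear heads map it to per-channel gamma/beta.  Zero-initialised
    heads make the modulation start as the identity, a common choice so
    conditioning is learned gradually.  All shapes are illustrative.
    """
    gamma = cond @ w_scale            # (C,) per-channel scale offset
    beta = cond @ w_shift             # (C,) per-channel shift
    return features * (1.0 + gamma)[:, None, None] + beta[:, None, None]

C, H, W, D = 4, 8, 8, 16
features = rng.standard_normal((C, H, W))   # UNet feature map
cond = rng.standard_normal(D)               # pooled style vector
w_scale = np.zeros((D, C))                  # zero-init: identity modulation
w_shift = np.zeros((D, C))
out = style_modulation(features, cond, w_scale, w_shift)
```

The wavelet modulation would instead inject the decomposed high- and low-frequency sub-bands, but follows the same inject-into-features pattern.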
RRNet: Towards ReLU-Reduced Neural Network for Two-party Computation Based Private Inference
Peng, Hongwu, Zhou, Shanglin, Luo, Yukui, Xu, Nuo, Duan, Shijin, Ran, Ran, Zhao, Jiahui, Huang, Shaoyi, Xie, Xi, Wang, Chenghong, Geng, Tong, Wen, Wujie, Xu, Xiaolin, Ding, Caiwen
Machine-Learning-as-a-Service (MLaaS) has emerged as a popular solution for accelerating inference in various applications [1]-[11]. The challenges of MLaaS are twofold: inference latency and privacy. To accelerate MLaaS training and inference, accelerated gradient sparsification [12], [13] and model compression methods [14]-[22] have been proposed. On the other side, a major limitation of MLaaS is the requirement for clients to reveal raw input data to the service provider, which may compromise the privacy of users. This issue has been highlighted in previous studies such as [23]. In this work, we aim to address this challenge by proposing a novel approach for privacy-preserving MLaaS: the ReLU-Reduced Neural Architecture Search (RRNet) framework, which jointly optimizes the structure of the deep neural network (DNN) model and the hardware architecture to support high-performance MPC-based PI. Our framework eliminates the need for manual heuristic analysis by automating the process of exploring the design space and identifying the optimal configuration of DNN models and hardware architectures for 2PC-based PI. We use FPGA accelerator design as a demonstration and summarize our contributions: 1) We propose a novel approach to addressing the high
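The joint model/hardware design-space exploration described above can be caricatured as a search that scores each (model, hardware) candidate with a latency model and an accuracy proxy. The toy enumeration below is only a stand-in for intuition: RRNet's actual search is NAS-based, and the knobs, cost models, and numbers here are invented for illustration.

```python
def joint_search(candidates, latency_fn, accuracy_fn, acc_floor):
    """Toy exhaustive search over joint (model, hardware) configurations.

    Keeps the lowest-latency candidate whose accuracy proxy clears a
    floor -- a stand-in for automated design-space exploration (the real
    framework uses NAS-style search, not enumeration).
    """
    best = None
    for cfg in candidates:
        acc = accuracy_fn(cfg)
        if acc < acc_floor:
            continue
        lat = latency_fn(cfg)
        if best is None or lat < best[1]:
            best = (cfg, lat, acc)
    return best

# Hypothetical knobs: ReLU layers kept (2PC comparisons are the costly
# operator), and an FPGA parallelism factor.  Cost models are made up.
candidates = [(r, p) for r in (2, 4, 8) for p in (1, 2, 4)]
latency_fn = lambda cfg: cfg[0] * 10.0 / cfg[1]   # ReLUs dominate 2PC cost
accuracy_fn = lambda cfg: 0.70 + 0.02 * cfg[0]    # more ReLUs, more accuracy
best = joint_search(candidates, latency_fn, accuracy_fn, acc_floor=0.75)
```

Even this caricature shows the coupling: the accuracy floor forces some ReLUs to stay, and the hardware knob then determines how cheaply those survivors can be served.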
PolyMPCNet: Towards ReLU-free Neural Architecture Search in Two-party Computation Based Private Inference
Peng, Hongwu, Zhou, Shanglin, Luo, Yukui, Duan, Shijin, Xu, Nuo, Ran, Ran, Huang, Shaoyi, Wang, Chenghong, Geng, Tong, Li, Ang, Wen, Wujie, Xu, Xiaolin, Ding, Caiwen
The rapid growth and deployment of deep learning (DL) has raised emerging privacy and security concerns. To mitigate these issues, secure multi-party computation (MPC) has been proposed to enable privacy-preserving DL computation. In practice, however, MPC protocols often come at very high computation and communication overhead, which can prohibit their adoption in large-scale systems. Two orthogonal research trends have attracted enormous interest in addressing the energy efficiency of secure deep learning: overhead reduction of the MPC comparison protocol, and hardware acceleration. However, existing works either achieve a low reduction ratio and suffer from high latency due to limited computation and communication savings, or are power-hungry because they mainly target general computing platforms such as CPUs and GPUs. In this work, as a first attempt, we develop a systematic framework, PolyMPCNet, for joint overhead reduction of the MPC comparison protocol and hardware acceleration, by integrating the hardware latency of the cryptographic building blocks into the DNN loss function to achieve high energy efficiency, accuracy, and security guarantees. Instead of heuristically checking the model sensitivity after a DNN is well-trained (by deleting or dropping some non-polynomial operators), our key design principle is to enforce exactly what is assumed in the DNN design -- training a DNN that is both hardware-efficient and secure, while escaping local minima and saddle points and maintaining high accuracy. More specifically, we propose a straight-through polynomial activation initialization method for a cryptographic-hardware-friendly trainable polynomial activation function that replaces the expensive 2P-ReLU operator. We also develop a cryptographic hardware scheduler and the corresponding performance model for the Field Programmable Gate Array (FPGA) platform.
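For the trainable polynomial that replaces 2P-ReLU, one plausible way to initialize it is a least-squares fit of a quadratic to ReLU over the expected activation range, so training starts close to the operator being replaced. The exact initialization in the paper may differ; this sketch only illustrates the idea.

```python
import numpy as np

def fit_poly_to_relu(lo=-3.0, hi=3.0, n=512):
    """Least-squares fit of a*x^2 + b*x + c to ReLU on [lo, hi].

    One plausible initialisation for a trainable polynomial activation
    replacing 2P-ReLU: start as close to ReLU as a quadratic allows, then
    let training refine the coefficients.  Range and grid are assumptions.
    """
    x = np.linspace(lo, hi, n)
    y = np.maximum(x, 0.0)
    A = np.stack([x ** 2, x, np.ones_like(x)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # (a, b, c)

a, b, c = fit_poly_to_relu()
x = np.linspace(-3.0, 3.0, 7)
poly = a * x ** 2 + b * x + c
relu_vals = np.maximum(x, 0.0)
```

Because ReLU(x) = (x + |x|)/2, the linear coefficient of a symmetric-interval fit comes out to exactly 0.5, and only the even part |x|/2 is approximated, which bounds the initial mismatch that training must absorb.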
NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
Xu, Nuo, Wang, Binghui, Ran, Ran, Wen, Wujie, Venkitasubramaniam, Parv
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risks for the dataset used in model training. In this paper, we propose NeuGuard, a novel and effective neuron-guided defense against MIAs. We identify a key weakness in existing defense mechanisms: they cannot simultaneously defend against two commonly used neural-network-based MIAs, indicating that these two attacks should be evaluated separately to assure defense effectiveness. NeuGuard jointly controls the output and inner neurons' activations with the objective of guiding the model outputs on the training set and testing set toward close distributions. It consists of class-wise variance minimization, which restricts the final output neurons, and layer-wise balanced output control, which constrains the inner neurons in each layer. We evaluate NeuGuard and compare it with state-of-the-art defenses against two neural-network-based MIAs and five of the strongest metric-based MIAs, including the newly proposed label-only MIA, on three benchmark datasets. Results show that NeuGuard outperforms the state-of-the-art defenses by offering a much improved utility-privacy trade-off, generality, and overhead.
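A simplified reading of the class-wise variance minimization term is a regularizer on the within-class variance of softmax outputs: pushing same-class examples toward a shared output distribution shrinks the member/non-member gap an attacker exploits. The implementation details below (softmax, mean-over-classes reduction) are illustrative, not necessarily NeuGuard's exact loss.

```python
import numpy as np

def class_wise_variance(logits, labels, num_classes):
    """Mean within-class variance of softmax outputs.

    Minimising this regulariser drives same-class outputs toward a
    common distribution, reducing the signal available to membership
    inference attacks.  A simplified sketch of class-wise variance
    minimisation; details are illustrative.
    """
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    total, count = 0.0, 0
    for c in range(num_classes):
        group = probs[labels == c]
        if len(group) > 1:
            total += group.var(axis=0).mean()
            count += 1
    return total / max(count, 1)

rng = np.random.default_rng(0)
labels = np.arange(32) % 3
tight = np.eye(3)[labels] * 5.0                 # identical outputs per class
loose = tight + rng.standard_normal((32, 3))    # per-example variation
```

Here `tight` (identical outputs within each class) scores zero, while `loose` (per-example variation) is penalized, which is exactly the gradient direction the defense would apply during training.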