Chakrabarti, Chaitali
Model Extraction Attacks on Split Federated Learning
Li, Jingtao, Rakin, Adnan Siraj, Chen, Xing, Yang, Li, He, Zhezhi, Fan, Deliang, Chakrabarti, Chaitali
Federated Learning (FL) is a popular collaborative learning scheme involving multiple clients and a server. FL focuses on protecting clients' data but turns out to be highly vulnerable to Intellectual Property (IP) threats. Since FL periodically collects and distributes the model parameters, a free-rider can download the latest model and thus steal model IP. Split Federated Learning (SFL), a recent variant of FL that supports training with resource-constrained clients, splits the model into two parts, giving one part to the clients (client-side model) and the remaining part to the server (server-side model). Thus, SFL prevents model leakage by design. Moreover, by blocking prediction queries, it can be made resistant to advanced IP threats such as traditional Model Extraction (ME) attacks. While SFL provides better IP protection than FL, it is still vulnerable. In this paper, we expose the vulnerability of SFL and show how malicious clients can launch ME attacks by querying the gradient information from the server side. We propose five variants of the ME attack, which differ in how the gradients are used as well as in the data assumptions. We show that, in practical settings, the proposed ME attacks work exceptionally well for SFL. For instance, when the server-side model has five layers, our proposed ME attack achieves over 90% accuracy, less than 2% below the victim model, with VGG-11 on CIFAR-10.
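To make the attack surface concrete, the sketch below shows one plausible gradient-matching variant in PyTorch: a malicious client trains a surrogate copy of the server-side model so that the gradients of the surrogate's loss with respect to the smashed activations match the gradients the real server returns. This is an illustrative reconstruction under stated assumptions, not the paper's exact algorithm; in particular, `query_server` is a hypothetical stand-in for the SFL server interface that returns dLoss/dz for the uploaded activations z.

```python
import torch
import torch.nn as nn

def extract_server_model(client_model, surrogate, data_loader, query_server,
                         epochs=10, lr=1e-3):
    """Fit `surrogate` so the gradients of its loss w.r.t. the smashed
    activations match the gradients returned by the victim server."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            # Smashed activations the client would normally send to the server.
            z = client_model(x).detach().requires_grad_(True)
            grad_true = query_server(z, y)  # dLoss/dz from the real server (assumed API)
            # Gradient of the surrogate's loss w.r.t. the same activations;
            # create_graph=True lets us backprop through this gradient.
            grad_sur, = torch.autograd.grad(criterion(surrogate(z), y), z,
                                            create_graph=True)
            match_loss = (grad_sur - grad_true).pow(2).sum()
            opt.zero_grad()
            match_loss.backward()
            opt.step()
    return surrogate
```

Once the surrogate's gradients agree with the server's on the attacker's data, the client-side model concatenated with the surrogate yields a functional stolen copy that can be used offline without further interaction with the server.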
Impact of On-Chip Interconnect on In-Memory Acceleration of Deep Neural Networks
Krishnan, Gokul, Mandal, Sumit K., Chakrabarti, Chaitali, Seo, Jae-sun, Ogras, Umit Y., Cao, Yu
With the widespread use of Deep Neural Networks (DNNs), machine learning algorithms have evolved in two diverse directions -- one with ever-increasing connection density for better accuracy and the other with more compact sizing for energy efficiency. The increase in connection density increases on-chip data movement, which makes efficient on-chip communication a critical function of the DNN accelerator. The contribution of this work is threefold. First, we illustrate that a point-to-point (P2P)-based interconnect is incapable of handling the high volume of on-chip data movement of DNNs. Second, we evaluate P2P and network-on-chip (NoC) interconnects (with a regular topology such as a mesh) for SRAM- and ReRAM-based in-memory computing (IMC) architectures across a range of DNNs. This analysis demonstrates the importance of choosing the optimal interconnect for an IMC-based DNN accelerator. Finally, we perform an experimental evaluation for different DNNs to empirically obtain the performance of the IMC architecture with both NoC-tree and NoC-mesh. We conclude that, at the tile level, NoC-tree is appropriate for compact DNNs employed at the edge, while NoC-mesh is necessary to accelerate DNNs with high connection density. Furthermore, we propose a technique to determine the optimal choice of interconnect for any given DNN. In this technique, we use analytical NoC models to evaluate the end-to-end communication latency of any given DNN. We demonstrate that interconnect optimization in the IMC architecture results in up to 6$\times$ improvement in energy-delay-area product for VGG-19 inference compared to state-of-the-art ReRAM-based IMC architectures.
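A rough flavor of such an analytical latency model is sketched below. The hop-count approximations, per-hop cycle counts, and the assumption of fully serialized packets are illustrative placeholders rather than the paper's calibrated NoC models; the point is only that closed-form per-topology estimates let one pick the interconnect for a given DNN's inter-tile traffic volume.

```python
import math

def mesh_latency(n_tiles, packets, router_cycles=4, link_cycles=1, flits=8):
    """Rough average end-to-end latency (cycles) on an n x n NoC-mesh."""
    n = math.isqrt(n_tiles)
    avg_hops = 2 * n / 3  # classic uniform-traffic approximation for a mesh
    return packets * (avg_hops * (router_cycles + link_cycles) + flits)

def tree_latency(n_tiles, packets, router_cycles=4, link_cycles=1, flits=8):
    """Rough average end-to-end latency (cycles) on a binary NoC-tree."""
    avg_hops = 2 * math.ceil(math.log2(max(n_tiles, 2)))  # up to root and back down
    return packets * (avg_hops * (router_cycles + link_cycles) + flits)

# For a given DNN layer's inter-tile traffic, pick whichever topology the
# model predicts is faster (here: 64 tiles, 10k packets, both illustrative).
topology = min([("NoC-mesh", mesh_latency(64, 10_000)),
                ("NoC-tree", tree_latency(64, 10_000))], key=lambda t: t[1])[0]
```

Compact edge DNNs generate little inter-tile traffic, so the shallower tree wins; densely connected DNNs shift the balance toward the mesh, matching the tile-level conclusion above.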
T-BFA: Targeted Bit-Flip Adversarial Weight Attack
Rakin, Adnan Siraj, He, Zhezhi, Li, Jingtao, Yao, Fan, Chakrabarti, Chaitali, Fan, Deliang
Traditional Deep Neural Network (DNN) security is mostly related to the well-known adversarial input example attack. Recently, another dimension of adversarial attack, namely, attack on DNN weight parameters, has been shown to be very powerful. As a representative one, the Bit-Flip based adversarial weight Attack (BFA) injects an extremely small amount of fault into the weight parameters to hijack the DNN function. Prior works on BFA focused on un-targeted attacks that can classify all inputs into a random output class by flipping a very small number of weight bits stored in computer memory. This paper proposes the first targeted BFA-based (T-BFA) adversarial weight attack on DNN models, which can intentionally mislead selected inputs to a target output class. The objectives are achieved by identifying the weight bits that are highly associated with the classification of the targeted output class through a novel class-dependent weight bit ranking algorithm. T-BFA performance has been successfully demonstrated on multiple network architectures for the image classification task. For example, by merely flipping 27 out of 88 million weight bits, T-BFA can misclassify all the images from the 'Ibex' class into the 'Proboscis Monkey' class (i.e., 100% attack success rate) on the ImageNet dataset, while maintaining 59.35% validation accuracy on ResNet-18. Moreover, we successfully demonstrate our T-BFA attack on a real computer prototype system running DNN computation.
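The flavor of the class-dependent ranking step can be sketched as follows in PyTorch. It scores weight positions by a first-order estimate of how much flipping their most significant bit would reduce the targeted-misclassification loss; the |gradient * weight| proxy and the implicit quantized-weight layout are assumptions for illustration, not the paper's exact ranking algorithm.

```python
import torch
import torch.nn as nn

def rank_weight_bits(model, x_src, target_class, top_k=10):
    """Score weight positions by a first-order estimate of how much flipping
    their most significant bit would push `x_src` toward `target_class`."""
    targets = torch.full((x_src.size(0),), target_class, dtype=torch.long)
    loss = nn.CrossEntropyLoss()(model(x_src), targets)
    model.zero_grad()
    loss.backward()
    candidates = []
    for name, p in model.named_parameters():
        if p.grad is None or p.dim() < 2:  # skip biases / norm parameters
            continue
        # Flipping the MSB of a quantized weight changes its value by roughly
        # its own magnitude, so |grad * w| approximates the loss change.
        score = (p.grad * p).abs().flatten()
        vals, idx = score.topk(min(top_k, score.numel()))
        candidates += [(v.item(), name, int(i)) for v, i in zip(vals, idx)]
    candidates.sort(key=lambda c: c[0], reverse=True)
    return candidates[:top_k]
```

A full attack would then flip the top-ranked bit in the stored weight representation, re-evaluate both the targeted success rate and the accuracy on non-targeted classes, and iterate until the target class is reached.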