We are grateful to the reviewers for their constructive comments, which helped to improve the quality and clarity of the paper.

Figure 1: Test accuracy on CIFAR100, as suggested by R1.
In summary, when ambiguous passports are forged and used (e.g.
We will include the above results in the final draft.

Table 2: Summary of network complexity for V1, V2 and V3 schemes.
- V1: Training - passport layers added; passports needed; 15-30% more training time. Inferencing - passport layers and passports needed; 10% more inferencing time. Verification - NO separate verification needed.
- V2: Training - passport layers added; passports needed; 100-125% more training time. Inferencing - passport layers and passports NOT needed; NO extra time incurred. Verification - passport layers and passports needed.
- V3: Training - passport layers added; passports needed; trigger set needed; 100-150% more training time. Inferencing - passport layers and passports NOT needed; NO extra time incurred. Verification - trigger set needed (black-box verification); passport layers and passports needed (white-box verification).
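The schemes compared in Table 2 all insert passport layers whose behaviour depends on a secret passport. As a minimal sketch (the function name, the key names, and the shapes are hypothetical illustrations, not the paper's actual construction), a passport layer can be modelled as an affine transform whose per-channel scale and bias are derived from the passport, so that a forged passport degrades inference:

```python
import numpy as np

def passport_affine(features, weight, passport):
    """Illustrative passport layer: scale and bias of an affine
    transform are derived from a secret passport rather than being
    free parameters (a simplification of the V1-V3 schemes)."""
    # Combine the passport keys with the layer weight to obtain
    # per-channel scale and bias (shapes are illustrative).
    scale = np.mean(weight * passport["scale_key"], axis=1)
    bias = np.mean(weight * passport["bias_key"], axis=1)
    return scale * features + bias

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))                 # layer weight
genuine = {"scale_key": rng.normal(size=(4, 8)),
           "bias_key": rng.normal(size=(4, 8))}
x = rng.normal(size=(4,))                   # per-channel features
y = passport_affine(x, w, genuine)

# A forged passport yields different scale/bias, so the layer's
# output (and hence model accuracy) changes.
forged = {"scale_key": rng.normal(size=(4, 8)),
          "bias_key": rng.normal(size=(4, 8))}
y_forged = passport_affine(x, w, forged)
```

This also suggests why V2 and V3 incur no inference overhead: once training fixes the scale and bias, the affine transform can be folded into the preceding layer and the passport discarded at deployment.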
Federated Deep Learning with Bayesian Privacy
Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan Yao, Qiang Yang
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users. For federated learning of deep neural networks with billions of model parameters, existing privacy-preserving solutions are unsatisfactory. Homomorphic encryption (HE) based methods provide secure privacy protection but suffer from extremely high computational and communication overheads, rendering them almost useless in practice. Deep learning with differential privacy (DP) was implemented as a practical learning algorithm at a manageable cost in complexity. However, DP is vulnerable to aggressive Bayesian restoration attacks, as disclosed in the literature and demonstrated in the experimental results of this work. To address these issues, we propose a novel Bayesian Privacy (BP) framework in which Bayesian restoration attacks are formulated as the probability of reconstructing private data from observed public information. Specifically, the proposed BP framework accurately quantifies privacy loss as the Kullback-Leibler (KL) divergence between the prior distribution of the private data and the posterior distribution of the restored private data conditioned on the exposed information. To the best of our knowledge, this Bayesian Privacy analysis is the first to provide theoretical justification of secure privacy-preserving capabilities against Bayesian restoration attacks. As a concrete use case, we demonstrate that a novel federated deep learning method using private passport layers simultaneously achieves high model performance, privacy-preserving capability, and low computational complexity. The theoretical analysis is in accordance with empirical measurements of information leakage in extensive experiments with a variety of DNNs on the MNIST, CIFAR10, and CIFAR100 image classification datasets.
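The KL-divergence formulation of Bayesian privacy loss can be illustrated numerically. The sketch below (the univariate-Gaussian prior/posterior and all numbers are illustrative assumptions, not the paper's measurements) uses the closed-form KL divergence between two Gaussians: a posterior sharply concentrated away from the prior indicates that the Bayesian restoration attack learned a lot about the private data, while an unchanged posterior indicates zero leakage:

```python
import math

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL(p || q) for two univariate Gaussians, in nats."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0)

# Hypothetical attacker's view of one private value:
prior = (0.0, 1.0)        # (mean, variance) before any observation
posterior = (0.8, 0.05)   # after observing exposed gradients/updates

# Privacy loss as KL(posterior || prior): large when the attack
# concentrates belief far from the prior.
bp_loss = gaussian_kl(*posterior, *prior)

# If the exposed information reveals nothing, posterior == prior
# and the loss is exactly zero.
no_leak = gaussian_kl(*prior, *prior)
```

Under this illustrative setup, `no_leak` is exactly 0, and `bp_loss` is strictly positive, matching the intuition that the framework assigns zero loss only when observations do not change the attacker's beliefs.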