FedAdOb: Privacy-Preserving Federated Deep Learning with Adaptive Obfuscation

Hanlin Gu, Jiahuan Luo, Yan Kang, Yuan Yao, Gongxi Zhu, Bowen Li, Lixin Fan, Qiang Yang

arXiv.org Artificial Intelligence 

Abstract--Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data. Concerns about privacy leakage, albeit demonstrated under specific conditions [1], have triggered numerous follow-up studies designing powerful attack methods and effective defense mechanisms aiming to thwart these attacks. Nevertheless, the privacy-preserving mechanisms employed in these defenses invariably compromise model performance because a fixed obfuscation is applied to private data or gradients. In this article, we therefore propose a novel adaptive obfuscation mechanism, coined FedAdOb, to protect private data without sacrificing the original model performance. Technically, FedAdOb utilizes passport-based adaptive obfuscation to ensure data privacy in both horizontal and vertical federated learning settings. The privacy-preserving capabilities of FedAdOb, specifically with regard to private features and labels, are theoretically proven through Theorems 1 and 2. Furthermore, extensive experimental evaluations conducted on various datasets and network architectures demonstrate the effectiveness of FedAdOb, exhibiting a superior trade-off between privacy preservation and model performance that surpasses existing methods.

I. Introduction

Federated Learning (FL) offers a privacy-preserving framework that allows multiple organizations to jointly build global models without disclosing their private datasets [2], [3], [4], [5]. Two distinct paradigms have been proposed in the context of FL [5]: Horizontal Federated Learning (HFL) and Vertical Federated Learning (VFL). HFL focuses on scenarios where multiple entities hold similar features but different samples; it is suitable for cases where data sources are distributed, such as healthcare institutions contributing patient data for disease prediction. On the other hand, VFL addresses situations where entities hold different attributes or features of the same samples; this approach is useful in scenarios like combining demographic information from banks with call records from telecom companies to predict customer behavior. Since the introduction of HFL and VFL, studies have highlighted the existence of privacy risks in specific scenarios.
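To make the passport-based adaptive obfuscation concrete, below is a minimal PyTorch sketch of a passport-style layer, assuming (as in prior passport-based model-protection work) that a layer's output is rescaled and shifted by affine parameters derived from a client-private "passport" tensor. The class and parameter names (PassportLinear, passport_gamma, passport_beta) are illustrative assumptions, not FedAdOb's actual API.

```python
# Hypothetical sketch of a passport-based adaptive obfuscation layer.
# Assumption: the affine scale/shift (gamma, beta) is derived from the
# interaction of the layer weights with client-private passports, so the
# obfuscation adapts as the model trains rather than staying fixed.
import torch
import torch.nn as nn


class PassportLinear(nn.Module):
    """Linear layer whose affine obfuscation (gamma, beta) is computed
    from private passports instead of being learned in the clear."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        # Client-private passports; never shared with the server.
        self.passport_gamma = nn.Parameter(torch.randn(out_features, in_features))
        self.passport_beta = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale/shift depend on both the weights and the private passports,
        # so outputs leaving the client are obfuscated adaptively.
        gamma = (self.weight * self.passport_gamma).mean(dim=1)  # (out_features,)
        beta = (self.weight * self.passport_beta).mean(dim=1)    # (out_features,)
        out = nn.functional.linear(x, self.weight)
        return gamma * out + beta


if __name__ == "__main__":
    layer = PassportLinear(16, 8)
    x = torch.randn(4, 16)
    print(layer(x).shape)  # torch.Size([4, 8])
```

In an FL deployment, a client would wrap the layers of its local (bottom) model with such passport layers, so that the embeddings or gradients it transmits are functions of passports that never leave the client; this is the intuition behind the paper's privacy guarantees, though the exact construction is given by FedAdOb itself.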
