Vertical Federated Learning
Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data
Federated Learning (FL) is an evolving paradigm that enables multiple parties to collaboratively train models without sharing raw data. Among its variants, Vertical Federated Learning (VFL) is particularly relevant to real-world, cross-organizational collaborations, where different parties contribute distinct features of a shared group of instances. In these scenarios, parties are often linked using fuzzy identifiers, a common practice termed multi-party fuzzy VFL. Existing models generally address either multi-party VFL or fuzzy VFL between two parties; extending them to practical multi-party fuzzy VFL typically results in significant performance degradation and higher costs for maintaining privacy.
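The vertical split described above can be sketched with a toy example: two parties hold disjoint feature columns of the same aligned instances and jointly fit a logistic model by exchanging only partial logits and residuals, never raw features. This is a minimal illustration of the VFL setting in general, not the Federated Transformer architecture; the data, party split, and training loop are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy aligned dataset: Party A holds features 0-2, Party B holds features 3-4.
X = rng.normal(size=(8, 5))
y = (X[:, 0] + X[:, 3] > 0).astype(float)
XA, XB = X[:, :3], X[:, 3:]

wA = np.zeros(3)  # Party A's local weights (never leave Party A)
wB = np.zeros(2)  # Party B's local weights (never leave Party B)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(200):
    # Each party contributes only its partial logits, not raw features.
    p = sigmoid(XA @ wA + XB @ wB)
    # The label holder broadcasts the residual; each party updates locally.
    residual = p - y
    wA -= lr * XA.T @ residual / len(y)
    wB -= lr * XB.T @ residual / len(y)

acc = ((sigmoid(XA @ wA + XB @ wB) > 0.5) == y).mean()
print(f"joint training accuracy: {acc:.2f}")
```

Note that this assumes the instances are already perfectly aligned across parties; the fuzzy-linkage problem the paper targets arises precisely when such exact alignment is unavailable.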
CAFE: Catastrophic Data Leakage in Vertical Federated Learning
Recent studies show that private training data can be leaked through the gradient-sharing mechanism deployed in distributed machine learning systems such as federated learning (FL). Increasing the batch size to complicate data recovery is often viewed as a promising defense against data leakage. In this paper, we revisit this defense premise and propose an advanced data leakage attack, with theoretical justification, that efficiently recovers batch data from the shared aggregated gradients. We name our method catastrophic data leakage in vertical federated learning (CAFE). Compared to existing data leakage attacks, our extensive experiments in vertical FL settings demonstrate the effectiveness of CAFE in performing large-batch data leakage with improved data recovery quality. We also propose a practical countermeasure to mitigate CAFE. Our results suggest that private data used in standard FL, especially in the vertical case, are at high risk of being leaked from training gradients. Our analysis implies unprecedented and practical data leakage risks in these learning settings. The code of our work is available at https://github.com/DeRafael/CAFE.
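The underlying leakage principle can be shown in a minimal, assumption-laden sketch (not the CAFE algorithm itself, which handles large aggregated batches): for a single sample through a linear layer with bias under squared loss, the shared gradient factors as a scalar times the input, so an attacker recovers the input exactly from the gradient alone. The model, data, and loss here are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Victim's private sample and label.
x_true = rng.normal(size=4)
y_true = 3.0

# Shared model: linear regression with bias, squared loss L = 0.5 * (pred - y)^2.
w = rng.normal(size=4)
b = 0.0

# Victim computes and shares its gradient, as in standard FL gradient exchange.
pred = w @ x_true + b
grad_w = (pred - y_true) * x_true  # dL/dw = (pred - y) * x
grad_b = (pred - y_true)           # dL/db = (pred - y)

# Attacker: for a single sample, grad_w / grad_b reveals x directly.
x_recovered = grad_w / grad_b
print(np.allclose(x_recovered, x_true))  # the private input is recovered
```

Batching sums such per-sample gradients, which is why larger batches are assumed to blur the signal; the paper's point is that this protection is weaker than commonly believed.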