Sharing Models or Coresets: A Study based on Membership Inference Attack
Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang, Kevin S. Chan
While each approach preserves data privacy to some extent thanks to not sharing the raw data, the exact extent of protection is unclear under sophisticated attacks that try to infer the raw data from the shared information. We present the first comparison between the two approaches in terms of target model accuracy, communication cost, and data privacy, where the last is measured by the accuracy of a state-of-the-art attack strategy called the membership inference attack. Our experiments quantify the accuracy-privacy-cost tradeoff of each approach, and reveal a nontrivial comparison that can be used to guide the design of model training processes.

… them is still missing. In this work, we take a first step towards filling this gap by comparing the federated learning approach and the coreset-based approach in terms of (1) the accuracy of the target machine learning model we want to train, (2) the communication cost during training, and (3) the leakage of the private training data. In particular, although neither approach will require the data sources to directly share their data, it has been shown in (Shokri et al., 2017) that models derived from a dataset can be used to infer the membership of the dataset (i.e., whether or not a given data record is contained in the dataset), known as the membership inference attack (MIA). Since the coreset can also be viewed as a model, we can thus use the accuracy of …
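To make the attack concrete, the idea behind a membership inference attack can be sketched with a simple confidence-thresholding variant: a model tends to be more confident on records it was trained on, so the attacker guesses "member" when the target model's confidence on a record is high. This is a minimal illustrative sketch (scikit-learn, synthetic data, a hypothetical `mia_predict` helper), not the shadow-model attack of Shokri et al. (2017) or the paper's exact setup.

```python
# Minimal sketch of a confidence-threshold membership inference attack (MIA).
# Assumptions: a scikit-learn style target model and synthetic data; the
# threshold value 0.9 is arbitrary and chosen only for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Split records into "members" (used for training) and "non-members" (held out).
X_mem, y_mem = X[:1000], y[:1000]
X_non, y_non = X[1000:], y[1000:]

# The target model is trained only on the member records.
target = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def mia_predict(model, X_query, threshold=0.9):
    """Guess membership: predict "member" for records on which the model
    assigns high confidence to its top class (a crude proxy for overfitting)."""
    confidence = model.predict_proba(X_query).max(axis=1)
    return confidence >= threshold

# Attack accuracy = fraction of correct member/non-member guesses,
# which is the privacy-leakage metric the paper's comparison uses.
guesses = np.concatenate([mia_predict(target, X_mem), mia_predict(target, X_non)])
truth = np.concatenate([np.ones(1000, dtype=bool), np.zeros(1000, dtype=bool)])
attack_acc = (guesses == truth).mean()
print(f"attack accuracy: {attack_acc:.2f}")
```

An attack accuracy near 0.5 means the model leaks little membership information (the attacker does no better than guessing); accuracy well above 0.5 indicates leakage. The same measurement applies to a coreset, since a coreset can itself be treated as the released model.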
Jul-6-2020