Understanding Practical Membership Privacy of Deep Learning
Marlon Tobaben, Gauri Pradhan, Yuan He, Joonas Jälkö, Antti Honkela
We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference. In terms of data set properties, we find a strong power law dependence between the number of examples per class in the data and the MIA vulnerability, as measured by the true positive rate of the attack at a low false positive rate. For an individual sample, large gradients at the end of training are strongly correlated with MIA vulnerability.
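As a rough illustration of the vulnerability metric used above, the sketch below computes an attack's true positive rate at a fixed low false positive rate from membership scores, and fits the reported power-law relation between examples per class and that rate in log-log space. The 0.1% FPR target, the score distributions, and the (shots, TPR) pairs are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.001):
    """TPR of a score-threshold attack at a fixed false positive rate.

    The threshold is set so that at most `target_fpr` of non-member
    scores exceed it; TPR is the fraction of member scores above it.
    """
    thresh = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(member_scores > thresh))

# Synthetic attack scores: members score slightly higher on average.
members = rng.normal(1.0, 1.0, size=10_000)
nonmembers = rng.normal(0.0, 1.0, size=100_000)
print(f"TPR at 0.1% FPR: {tpr_at_fpr(members, nonmembers):.4f}")

# Hypothetical (examples per class, TPR at low FPR) pairs. A power law
# tpr = c * shots**k is linear in log-log space, so fit it with polyfit.
shots = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
tpr = np.array([0.20, 0.11, 0.06, 0.035, 0.02, 0.011])
k, log_c = np.polyfit(np.log(shots), np.log(tpr), deg=1)
print(f"fitted exponent k = {k:.2f}, prefactor c = {np.exp(log_c):.3f}")
```

For the per-sample finding, a minimal PyTorch probe of the end-of-training gradient norm of a single example is sketched below; the tiny linear classifier and the synthetic input are placeholders, not the models studied in the paper.

```python
import torch

model = torch.nn.Linear(16, 4)          # stand-in for a fine-tuned classifier
loss_fn = torch.nn.CrossEntropyLoss()

def grad_norm(x, y):
    """L2 norm of the loss gradient w.r.t. all parameters for one example."""
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    return torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters())).item()

x, y = torch.randn(16), torch.tensor(2)
print(f"per-example gradient norm: {grad_norm(x, y):.4f}")
```

Examples with unusually large norms under this probe would, per the abstract's correlation, be the ones most exposed to membership inference.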
arXiv.org Artificial Intelligence
Feb-7-2024
- Country:
  - Europe (1.00)
  - North America
    - Canada > Ontario > Toronto (0.14)
    - United States > California > San Francisco County > San Francisco (0.14)
- Genre:
  - Research Report (0.64)
- Industry:
  - Health & Medicine (0.46)
  - Information Technology > Security & Privacy (0.68)