Understanding Practical Membership Privacy of Deep Learning

Tobaben, Marlon, Pradhan, Gauri, He, Yuan, Jälkö, Joonas, Honkela, Antti

arXiv.org Artificial Intelligence 

We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference. In terms of data set properties, we find a strong power-law dependence between the number of examples per class in the data and the MIA vulnerability, as measured by the true positive rate of the attack at a low false positive rate. For an individual sample, large gradients at the end of training are strongly correlated with MIA vulnerability.
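
The vulnerability metric used in the abstract, the attack's true positive rate at a fixed low false positive rate, can be computed directly from per-sample attack scores. The following is a minimal sketch, not taken from the paper; the function name, the score distributions, and the 0.1% FPR target are illustrative assumptions.

```python
# Minimal sketch: TPR of a score-thresholding membership inference attack
# at a fixed low FPR, given attack scores for members and non-members.
import numpy as np

def tpr_at_low_fpr(member_scores, non_member_scores, target_fpr=0.001):
    """Return the attack TPR at the given FPR.

    Higher score means the attacker is more confident the sample is a
    training-set member. The threshold is set so that at most target_fpr
    of non-members are (falsely) flagged as members.
    """
    non_members = np.sort(non_member_scores)
    k = int(np.floor(target_fpr * len(non_members)))
    if k < len(non_members):
        threshold = non_members[len(non_members) - k - 1]
    else:
        threshold = -np.inf
    return float(np.mean(member_scores > threshold))

# Hypothetical example: well-separated score distributions yield a high
# TPR even at a 0.1% false positive rate.
rng = np.random.default_rng(0)
members = rng.normal(2.0, 1.0, size=10_000)      # assumed member scores
non_members = rng.normal(0.0, 1.0, size=10_000)  # assumed non-member scores
print(f"TPR @ 0.1% FPR: {tpr_at_low_fpr(members, non_members):.3f}")
```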