PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining
We present PANORAMIA, a privacy leakage measurement framework for machine learning models that relies on membership inference attacks using generated data as non-members. By relying on generated non-member data, PANORAMIA eliminates the common dependency of privacy measurement tools on in-distribution non-member data. As a result, PANORAMIA does not modify the model, training data, or training process, and only requires access to a subset of the training data. We evaluate PANORAMIA on ML models for image and tabular data classification, as well as on large-scale language models.
Kazmi, Mishaal, Lautraite, Hadrien, Akbari, Alireza, Soroco, Mauricio, Tang, Qiaoyue, Wang, Tao, Gambs, Sébastien, Lécuyer, Mathias