Dataset-Level Attribute Leakage in Collaborative Learning
Wanrong Zhang, Shruti Tople, Olga Ohrimenko
Attacks on the leakage of global properties of data are concerned with learning information about the data owner, as opposed to individuals whose privacy may be violated via membership or attribute attacks. The global properties of a dataset are confidential when they are related to the proprietary information or IP that the data contains, and its owner is not willing to share them. Consider the advantage one can gain by learning demographic information about customers or the sales distribution across a competitor's products. We show that multi-party computation can cause leakage of global dataset properties between the parties even when parties obtain only black-box access to the final model. In particular, a "curious" party can infer the distribution of sensitive attributes in other parties' data with high accuracy. This raises concerns regarding the confidentiality of properties pertaining to the whole dataset as opposed to individual data records. We show that our attack can leak population-level properties in datasets of different types, including tabular, text, and …
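The snippet names the threat model (a "curious" party with only black-box access to the final model infers the distribution of a sensitive attribute in other parties' data) but not the mechanics. The sketch below illustrates one generic way such a black-box property-inference attack can be mounted, via shadow models and a meta-classifier; it is not the authors' exact method, and the synthetic data, the candidate attribute ratios (0.2 vs. 0.8), and the helper names `make_data` and `black_box_signature` are illustrative assumptions.

```python
# Hypothetical sketch of a generic black-box property-inference attack,
# not the paper's exact construction. The adversary trains "shadow" models
# on auxiliary data whose sensitive-attribute ratio is known, queries each
# shadow model (and the target model) on a fixed probe set, and fits a
# meta-classifier mapping the resulting output vectors to the ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, female_ratio):
    """Toy tabular data: one sensitive attribute plus noise features;
    the label correlates weakly with the sensitive attribute."""
    sex = (rng.random(n) < female_ratio).astype(float)
    noise = rng.normal(size=(n, 4))
    X = np.column_stack([sex, noise])
    y = ((0.8 * sex + noise[:, 0] + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)
    return X, y

probe, _ = make_data(200, 0.5)  # fixed query set reused for every model

def black_box_signature(model):
    """Adversary's view: only predicted probabilities on the probe set."""
    return model.predict_proba(probe)[:, 1]

# Step 1: shadow models trained on data with known attribute ratios.
ratios, signatures = [], []
for _ in range(300):
    r = rng.choice([0.2, 0.8])  # candidate "low" vs. "high" ratio
    X, y = make_data(1000, r)
    shadow = LogisticRegression(max_iter=1000).fit(X, y)
    ratios.append(r)
    signatures.append(black_box_signature(shadow))

# Step 2: meta-classifier distinguishing the two candidate ratios.
meta = LogisticRegression(max_iter=1000).fit(np.array(signatures),
                                             np.array(ratios) > 0.5)

# Step 3: attack a victim model whose training ratio is unknown to the adversary.
X_victim, y_victim = make_data(1000, 0.8)
victim = LogisticRegression(max_iter=1000).fit(X_victim, y_victim)
guess_high = meta.predict(black_box_signature(victim).reshape(1, -1))[0]
print("inferred ratio:", 0.8 if guess_high else 0.2)
```

In this sketch the adversary never sees the victim's training data; everything it learns comes from prediction probabilities on a fixed probe set, which is why restricting parties to black-box access to the final model does not by itself prevent this kind of dataset-level leakage.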
Oct-20-2020
- Country:
- North America > United States (0.29)
- Genre:
- Research Report > Experimental Study (0.69)
- Industry:
- Health & Medicine (0.93)
- Information Technology > Security & Privacy (1.00)
- Law (0.66)
- Technology:
  - Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
  - Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
  - Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
  - Information Technology > Communications (1.00)
  - Information Technology > Data Science > Data Mining (0.93)
  - Information Technology > Security & Privacy (1.00)