A General Framework for Data-Use Auditing of ML Models
Huang, Zonghao, Gong, Neil Zhenqiang, Reiter, Michael K.
arXiv.org Artificial Intelligence
Auditing the use of data in training machine-learning (ML) models is an increasingly pressing challenge, as myriad ML practitioners routinely leverage the effort of content creators to train models without their permission. In this paper, we propose a general method to audit an ML model for the use of a data-owner's data in training, without prior knowledge of the ML task for which the data might be used.

Passive data auditing, commonly referred to as membership inference [7, 13, 27, 65, 83], infers if a data sample is a member of an ML model's training set. However, such passive techniques have an inherent limitation: they do not provide any quantitative guarantee on the false detection of their inference results. In contrast, proactive data auditing techniques embed marks into data before its publication [24, 38, 39, 59, 74, 79, 82] and can provide detection results
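The passive approach contrasted above can be sketched as the classic loss-threshold membership-inference test: flag a sample as a likely training-set member when the model's loss on it is unusually low. This is a minimal illustrative sketch of that *passive* technique, not the paper's proposed proactive method; the loss values and the threshold here are hypothetical.

```python
# Minimal sketch of a passive loss-threshold membership-inference test.
# All losses and the threshold below are toy/hypothetical values.

def loss_threshold_membership_inference(losses, threshold):
    """Flag each sample as a likely training-set member if the model's
    loss on it falls below the threshold (members tend to be fit more
    closely by the model than non-members)."""
    return [loss < threshold for loss in losses]

# Toy losses: training-set members usually show lower loss.
member_losses = [0.05, 0.10, 0.08]
nonmember_losses = [0.90, 1.20, 0.70]

preds_members = loss_threshold_membership_inference(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_membership_inference(nonmember_losses, threshold=0.5)
```

Note that the threshold is a heuristic with no calibrated error bound, which is exactly the limitation the abstract highlights: passive inference offers no quantitative guarantee on false detections, whereas proactive marking can.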
Aug-4-2024