Unleashing the Power of Randomization in Auditing Differentially Private ML
Neural Information Processing Systems
We present a rigorous methodology for auditing differentially private machine learning by adding multiple carefully designed examples, called canaries, to the training data. We take a first-principles approach based on three key components. First, we introduce Lifted Differential Privacy (LiDP), which extends the definition of differential privacy to handle randomized datasets; this gives us the freedom to design randomized canaries. Second, we audit LiDP by trying to distinguish between a model trained with K canaries and one trained with K - 1 canaries, leaving one canary out.
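To make the leave-one-out distinguishing test concrete, here is a minimal sketch of how an auditor might turn canary test statistics into an empirical lower bound on the privacy parameter epsilon. This is an illustrative toy, not the paper's actual estimator: the function name, the threshold test, and the synthetic score distributions are all assumptions introduced for this example.

```python
import numpy as np

def audit_epsilon_lower_bound(scores_in, scores_out, threshold):
    """Empirical DP lower bound from a simple threshold membership test.

    scores_in:  test statistics of canaries included in training
    scores_out: test statistics of canaries that were left out
    A gap between the true- and false-positive rates of the threshold
    test implies a lower bound on epsilon (taking delta = 0 here).
    """
    tpr = np.mean(np.asarray(scores_in) >= threshold)   # canary detected when present
    fpr = np.mean(np.asarray(scores_out) >= threshold)  # "detected" when absent
    # Clip rates away from 0 and 1 to keep the logarithms finite.
    tpr = np.clip(tpr, 1e-6, 1 - 1e-6)
    fpr = np.clip(fpr, 1e-6, 1 - 1e-6)
    return max(np.log(tpr / fpr), np.log((1 - fpr) / (1 - tpr)))

# Toy usage: included canaries score higher, on average, than left-out ones.
rng = np.random.default_rng(0)
scores_in = rng.normal(1.0, 1.0, size=1000)   # hypothetical "in" statistics
scores_out = rng.normal(0.0, 1.0, size=1000)  # hypothetical "out" statistics
eps_lb = audit_epsilon_lower_bound(scores_in, scores_out, threshold=0.5)
```

A trivially private mechanism would make the two score distributions identical, driving the bound toward zero; the more separable they are, the larger the certified epsilon lower bound.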