privacy budget
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > Virginia > Albemarle County > Charlottesville (0.04)
- South America > Paraguay > Asunción > Asunción (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.94)
- Information Technology > Data Science > Data Mining (0.85)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
- North America > United States > Virginia (0.05)
- Asia > China > Hubei Province > Wuhan (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > New York > Rensselaer County > Troy (0.04)
- Europe > Belgium > Flanders > East Flanders > Ghent (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- North America > Canada (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning
Jiaqi Wang
When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to attain a differential privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract high-fidelity histograms of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although this attack does not directly violate the differential privacy guarantee, it clearly violates privacy norms and expectations, and would not be possible at all without the noise inserted to obtain differential privacy. In fact, counter-intuitively, the attack becomes easier as we add more noise to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treat differential privacy as a panacea.
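For intuition about the leakage channel the abstract describes, here is a minimal simulation sketch. It is not the paper's attack: it assumes a Laplace noisy-max aggregator (as in the original PATE) and a toy 3-class, 10-teacher setting, and the brute-force adversary recover_histogram, which matches the observed label frequencies against the distributions induced by candidate vote histograms, is only an illustration of the idea. The function names and parameter values are made up for this example; noisy_argmax shows a single query and label_distribution is its vectorized repetition.

```python
import itertools
import numpy as np

NUM_CLASSES = 3     # toy setting, not from the paper
NUM_TEACHERS = 10   # toy setting, not from the paper

def noisy_argmax(hist, scale, rng):
    """PATE-style aggregation for one query: add Laplace noise to the per-class
    vote counts and release only the arg-max label."""
    noisy = np.asarray(hist, float) + rng.laplace(scale=scale, size=len(hist))
    return int(np.argmax(noisy))

def label_distribution(hist, scale, rng, n=20_000):
    """Empirical distribution of released labels for one fixed input (i.e. one
    fixed teacher-vote histogram), vectorized over n repeated queries."""
    noisy = np.asarray(hist, float) + rng.laplace(scale=scale, size=(n, len(hist)))
    return np.bincount(noisy.argmax(axis=1), minlength=len(hist)) / n

def recover_histogram(observed, scale, rng):
    """Toy adversary: brute-force every candidate vote histogram and keep the one
    whose induced label distribution best matches what was observed."""
    candidates = [c for c in itertools.product(range(NUM_TEACHERS + 1), repeat=NUM_CLASSES)
                  if sum(c) == NUM_TEACHERS]
    errors = [np.abs(label_distribution(c, scale, rng) - observed).sum() for c in candidates]
    return candidates[int(np.argmin(errors))]

rng = np.random.default_rng(0)
true_votes = [6, 3, 1]   # the sensitive per-teacher vote histogram for one input
scale = 20.0             # with tiny noise the released label is nearly deterministic and
                         # reveals only the plurality class; larger noise spreads probability
                         # across classes, which is what histogram recovery exploits
observed = label_distribution(true_votes, scale, rng, n=100_000)
print("observed label frequencies:", observed)
print("recovered vote histogram:  ", recover_histogram(observed, scale, rng))
```

With enough repeated queries, the recovered candidate is typically equal or close to the true vote histogram, illustrating how the aggregation's stochasticity exposes more than the released labels alone.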
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
e9bf14a419d77534105016f5ec122d62-Supplemental.pdf
Therefore, if ν(·) < +∞, then we can bound (10) with e^{αν(·)}. To avoid crowded notation, we drop the conditioning on z from Pr[· | ρ = z]. The issue is how to proceed. Let φ be the standard normal density function and Φ be the CDF. The algorithm uses SVT so that it only releases the private answers to the queries if the answer is sufficiently different from the "guess".
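The SVT sentence above can be made concrete with a generic AboveThreshold-style sketch. This is not the algorithm from the supplemental: it assumes sensitivity-1 queries, a public "guess" per query, noise scales from the textbook Sparse Vector Technique, and an even split of the budget between selecting and releasing answers. The name svt_release_far_from_guess and all parameter values are illustrative.

```python
import numpy as np

def svt_release_far_from_guess(answers, guesses, threshold, epsilon, cutoff, rng):
    """Sparse-Vector-style release (textbook AboveThreshold/NumericSparse pattern,
    assuming each query has sensitivity 1 and the guesses are public): a noisy
    answer is released only when it deviates from the analyst's guess by more than
    a noisy threshold; the run stops after `cutoff` releases."""
    eps_select, eps_release = epsilon / 2.0, epsilon / 2.0   # one valid budget split
    noisy_threshold = threshold + rng.laplace(scale=2.0 * cutoff / eps_select)
    released = []
    for i, (a, g) in enumerate(zip(answers, guesses)):
        # Noisy test: is the true answer "sufficiently different from the guess"?
        if abs(a - g) + rng.laplace(scale=4.0 * cutoff / eps_select) >= noisy_threshold:
            # Spend release budget on a fresh noisy copy of the answer.
            released.append((i, a + rng.laplace(scale=cutoff / eps_release)))
            if len(released) >= cutoff:
                break
        # Below-threshold queries release nothing; the analyst keeps using the guess.
    return released

rng = np.random.default_rng(1)
answers = [12.0, 12.5, 30.0, 13.0, 45.0]   # hypothetical private query answers
guesses = [12.0, 12.0, 12.0, 12.0, 12.0]   # analyst's public guesses
print(svt_release_far_from_guess(answers, guesses, threshold=5.0,
                                 epsilon=1.0, cutoff=2, rng=rng))
```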
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- Europe > Germany (0.04)
- Asia (0.04)