Individual Privacy Accounting via a Rényi Filter

Vitaly Feldman, Tijana Zrnic

arXiv.org Machine Learning 

Understanding how the privacy of an individual degrades as the number of analyses using their data grows is of paramount importance in privacy-preserving data analysis. On one hand, it allows individuals to participate in multiple disjoint statistical analyses while knowing that their privacy cannot be compromised by aggregating the resulting reports. On the other hand, it is crucial for privacy-preserving algorithm design: instead of having to reason about the privacy properties of a complex algorithm, one can reason about the privacy of the subroutines that make up the final algorithm. For differential privacy [11], this accounting of privacy losses is typically done using composition theorems. Importantly, since statistical analyses often rely on the outputs of previous analyses, and algorithmic subroutines feed into one another, the composition theorems need to be adaptive; that is, they must allow the choice of which algorithm to run next to depend on the outputs of all previous computations. For example, in gradient descent, the computation of the gradient depends on the value of the current iterate, which is itself the output of the previous steps of the algorithm. Given the central role that adaptive composition theorems play in differentially private data analysis, they have been investigated in numerous works (e.g.
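To make the filtering idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual construction) of a privacy filter: Rényi differential privacy losses at a fixed order compose additively under adaptive composition, so a filter can track an individual's accumulated loss and halt before a pre-specified budget would be exceeded. The class and method names here are illustrative assumptions.

```python
class RenyiFilter:
    """Toy privacy filter: accumulates per-round Rényi privacy losses
    (at some fixed order alpha) and refuses any round that would push
    the running total over a pre-specified budget.

    This is an illustrative sketch, not the construction from the paper.
    """

    def __init__(self, budget: float) -> None:
        self.budget = budget  # total Rényi privacy budget for this individual
        self.spent = 0.0      # Rényi loss accumulated so far

    def try_continue(self, next_loss: float) -> bool:
        """Return True and charge the budget if the next computation,
        with Rényi loss `next_loss`, fits; otherwise halt (return False).
        Adaptive composition of RDP losses is additive, so a running sum
        is a valid bound on the individual's total loss."""
        if self.spent + next_loss > self.budget:
            return False  # halting keeps the total within budget
        self.spent += next_loss
        return True


# Usage: an individual with budget 1.0 participates in adaptively chosen
# analyses; rounds are admitted until the cumulative loss would overflow.
f = RenyiFilter(budget=1.0)
print(f.try_continue(0.4))  # admitted
print(f.try_continue(0.4))  # admitted
print(f.try_continue(0.4))  # rejected: 0.8 + 0.4 would exceed 1.0
```

A key point this sketch illustrates: because the filter charges each individual only their own realized loss per round, which can be far below the worst-case loss of the mechanism, individuals whose data barely influences the computations can participate in many more rounds than worst-case composition would allow.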
