Individual Privacy Accounting via a Rényi Filter
Vitaly Feldman, Tijana Zrnic
Understanding how privacy of an individual degrades as the number of analyses using their data grows is of paramount importance in privacy-preserving data analysis. On one hand, this allows individuals to participate in multiple disjoint statistical analyses, all the while knowing that their privacy cannot be compromised by aggregating the resulting reports. On the other hand, this feature is crucial for privacy-preserving algorithm design -- instead of having to reason about the privacy properties of a complex algorithm, it allows reasoning about the privacy of the subroutines that make up the final algorithm. For differential privacy [11], this accounting of privacy losses is typically done using composition theorems. Importantly, given that statistical analyses often rely on the outputs of previous analyses, and that algorithmic subroutines feed into one another, the composition theorems need to be adaptive, namely, allow the choice of which algorithm to run next to depend on the outputs of all previous computations. For example, in gradient descent, the computation of the gradient depends on the value of the current iterate, which itself is the output of the previous steps of the algorithm. Given the central role that adaptive composition theorems play for differentially private data analysis, they have been investigated in numerous works (e.g.
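To make the adaptive composition idea concrete, here is a minimal sketch of standard Rényi differential privacy (RDP) accounting, under the textbook assumptions that each round applies a Gaussian mechanism with sensitivity 1 and noise multiplier `sigma`, and that RDP guarantees add across adaptively chosen rounds. This illustrates ordinary worst-case RDP composition, not the individual Rényi filter the paper itself introduces; the function names and parameter values are illustrative.

```python
import math

def gaussian_rdp(alpha: float, sigma: float) -> float:
    # Renyi divergence of order alpha for the Gaussian mechanism
    # with L2 sensitivity 1 and noise multiplier sigma: alpha / (2 sigma^2).
    return alpha / (2 * sigma ** 2)

def rdp_to_dp(rdp_eps: float, alpha: float, delta: float) -> float:
    # Standard conversion from (alpha, rdp_eps)-RDP to (eps, delta)-DP.
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1)

# Adaptive composition: per-round RDP losses simply add, even when the
# mechanism run in each round depends on all earlier outputs (as in
# gradient descent, where each gradient step depends on the current iterate).
alpha, sigma, delta, rounds = 8.0, 4.0, 1e-5, 100
total_rdp = sum(gaussian_rdp(alpha, sigma) for _ in range(rounds))
eps = rdp_to_dp(total_rdp, alpha, delta)
```

The linear growth of `total_rdp` in the number of rounds is what composition theorems control; the paper's contribution is to track such losses per individual rather than uniformly across the dataset.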
Sep-14-2020