Laplace Transform Interpretation of Differential Privacy

Chourasia, Rishav, Javaid, Uzair, Sikdar, Biplab


Differential privacy (DP) [13] has become a widely adopted standard for quantifying the privacy of algorithms that process statistical data. In simple terms, differential privacy bounds the influence a single data point may have on the outcome probabilities. Being a statistical property, the design of differentially private algorithms involves a pen-and-paper analysis of any randomness internal to the processing that obscures the influence a data point might have on the output. A clear understanding of the nature of differential privacy notions is therefore paramount for the study and design of privacy-preserving algorithms.

Over the years of its exploration, various functional interpretations of differential privacy have emerged. These include the privacy-profile curve δ(ϵ) [5] that traces the (ϵ, δ)-DP point guarantees; the f-DP view [11] of the worst-case trade-off curve between type I and type II errors for the hypothesis test of membership [19, 6]; the Rényi DP function [23] of order q that admits a natural analytical composition [1, 23]; the privacy loss distribution (PLD) view [29] that allows for approximate numerical composition [20, 18]; and the recent characteristic-function formulation of the dominating privacy loss random variables by Zhu et al. [32]. Each of these formalisms has its own properties and use cases, and none of them appears superior in all aspects. Regardless of their differences, they all share a common difficulty: certain types of manipulations on them are hard to perform in the time domain but considerably simpler in the frequency domain. For instance, Koskela et al. [20] noted that composing the PLDs of two mechanisms involves convolving their probability densities, which can be approximated numerically and efficiently using the fast Fourier transform (FFT).
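For concreteness, the guarantees these interpretations trace can be stated in common notation (our shorthand for this overview, not necessarily the symbols used later in the paper): writing $x \simeq x'$ for datasets differing in a single record, a mechanism $M$ is $(\epsilon, \delta)$-DP [13] if for every measurable event $S$,
\[
\Pr[M(x) \in S] \;\le\; e^{\epsilon} \, \Pr[M(x') \in S] + \delta .
\]
The privacy profile [5] is then the curve of tightest such trade-offs,
\[
\delta(\epsilon) \;=\; \sup_{x \simeq x'} \; \sup_{S} \; \Big( \Pr[M(x) \in S] - e^{\epsilon} \, \Pr[M(x') \in S] \Big),
\]
while Rényi DP of order $q$ [23] instead bounds the Rényi divergence, $D_q\!\left( M(x) \,\|\, M(x') \right) \le \rho(q)$ for all $x \simeq x'$. The latter composes additively: running $k$ mechanisms with bounds $\rho_1(q), \dots, \rho_k(q)$ yields a combined bound of $\sum_{i=1}^{k} \rho_i(q)$ [1, 23].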
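To make this frequency-domain advantage concrete, the following sketch (our illustration, not the authors' or Koskela et al.'s implementation) composes two PLDs by FFT-based convolution; the Gaussian-mechanism example, grid bounds, and the helper names gaussian_pld, compose, and delta_at are all assumptions made for the sake of the example.

```python
# Minimal sketch of numerical PLD composition in the spirit of [20]:
# discretize each privacy loss distribution on a common uniform grid,
# convolve the resulting PMFs via FFT, and read off delta(eps).
import numpy as np

def gaussian_pld(sigma, grid):
    """PMF of the privacy loss L = log(p(Y|x)/p(Y|x')) for the Gaussian
    mechanism with sensitivity 1 and noise scale sigma, discretized on
    `grid`. L is Gaussian with mean mu = 1/(2 sigma^2) and variance 2 mu."""
    mu = 1.0 / (2.0 * sigma ** 2)
    density = np.exp(-((grid - mu) ** 2) / (4.0 * mu)) / np.sqrt(4.0 * np.pi * mu)
    pmf = density * (grid[1] - grid[0])   # Riemann approximation of the PMF
    return pmf / pmf.sum()                # absorb discretization error

def compose(pmf_a, pmf_b):
    """Convolve two PLD PMFs on a shared uniform grid via FFT.
    Privacy losses add under composition, so their PMFs convolve."""
    n = len(pmf_a) + len(pmf_b) - 1       # zero-pad: linear, not circular
    return np.real(np.fft.ifft(np.fft.fft(pmf_a, n) * np.fft.fft(pmf_b, n)))

def delta_at(eps, grid, pmf):
    """delta(eps) = E[(1 - e^{eps - L})_+] under the privacy loss L."""
    return float(np.sum(np.maximum(0.0, 1.0 - np.exp(eps - grid)) * pmf))

# Compose two Gaussian mechanisms with sigma = 2 (illustrative parameters).
grid = np.linspace(-5.0, 5.0, 4001)
pmf = gaussian_pld(2.0, grid)
composed = compose(pmf, pmf)
# After convolution the losses add, so the composed grid starts at 2*grid[0].
step = grid[1] - grid[0]
composed_grid = 2 * grid[0] + step * np.arange(len(composed))
print(delta_at(1.0, composed_grid, composed))
```

The zero-padding to length len(pmf_a) + len(pmf_b) - 1 turns the FFT's circular convolution into the linear convolution that composition of independent privacy losses requires; each composition step costs O(n log n) rather than the O(n^2) of direct convolution.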