Quantum Information Ordering and Differential Privacy

Ayanava Dasgupta, Naqueeb Ahmad Warsi, Masahito Hayashi

arXiv.org Artificial Intelligence 

We study quantum differential privacy (QDP) by defining a notion of the order of informativeness between two pairs of quantum states. In particular, we show that if the hypothesis testing divergence of one pair dominates that of the other, then this dominance holds for every f-divergence. This approach completely characterizes (ε,δ)-QDP mechanisms by identifying the most informative (ε,δ)-DP quantum state pairs. We apply this to analyze the stability of quantum differentially private learning algorithms, generalizing classical results to the case δ > 0. Additionally, we study precise limits for privatized hypothesis testing and privatized quantum parameter estimation, including tight upper bounds on the quantum Fisher information under QDP. Finally, we establish near-optimal contraction bounds for differentially private quantum channels with respect to the hockey-stick divergence.

I. Introduction

A fundamental challenge in modern machine learning is the trade-off between privacy and information extraction. In this work, we treat both sides explicitly: privacy, ensuring that algorithmic outputs do not reveal significant information about the respondents' input data, and the investigator's goal of extracting as much useful information as possible for accurate learning and estimation. Differential privacy (DP) provides a rigorous mathematical framework for balancing these opposing requirements. Accordingly, we structure our contributions in three steps: the first step addresses privacy, the second addresses information extraction under privacy constraints, and the third treats the quantum channel setup, where the situation is more involved; we mark the transition to each step explicitly in the text.
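For orientation, the hockey-stick divergence mentioned above admits a standard formulation of (ε,δ)-QDP; the following is a sketch of that standard definition, with notation that may differ from the paper's:

```latex
% Hockey-stick divergence between quantum states \rho and \sigma,
% where (X)_+ denotes the positive part of a Hermitian operator X:
E_\gamma(\rho \,\|\, \sigma) \;=\; \operatorname{Tr}\,(\rho - \gamma\sigma)_+ , \qquad \gamma \ge 1 .

% A quantum mechanism (channel) \mathcal{A} is (\varepsilon,\delta)-QDP iff,
% for every pair of neighboring inputs \rho \sim \sigma,
E_{e^{\varepsilon}}\bigl(\mathcal{A}(\rho) \,\|\, \mathcal{A}(\sigma)\bigr) \;\le\; \delta .
```

For δ = 0 this recovers the pure ε-DP condition, and the contraction bounds of the paper quantify how much a private channel can shrink E_γ between any two inputs.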
This step develops the privacy side of the trade-off from the respondent's perspective by studying the stability [1], [2] of learning algorithms. From the respondent's viewpoint, privacy means that the inclusion or exclusion of their individual data should not materially affect the mechanism's output, so that they can contribute data without fear of being singled out by inference on that output. An algorithm is considered stable if its output does not change drastically when a single respondent's data is changed; this point-wise insensitivity is precisely the respondent-centric guarantee we seek.
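As a classical warm-up for this point-wise insensitivity (an illustration only, not the paper's quantum setting), the following sketch shows the Laplace mechanism for a bounded mean: on two neighboring datasets, the output densities differ at every point by at most a factor of e^ε, which is exactly the pure-DP form of stability.

```python
import math

def laplace_pdf(x, loc, scale):
    """Density of the Laplace distribution centered at `loc` with scale `scale`."""
    return math.exp(-abs(x - loc) / scale) / (2 * scale)

def dp_mean_scale(n, epsilon):
    """Noise scale for an eps-DP mean of n values in [0, 1] (sensitivity 1/n)."""
    sensitivity = 1.0 / n
    return sensitivity / epsilon

# Neighboring datasets: they differ in a single respondent's record.
D  = [0.2, 0.9, 0.4, 0.7]
D2 = [0.2, 0.9, 0.4, 0.0]

epsilon = 1.0
scale = dp_mean_scale(len(D), epsilon)
m1, m2 = sum(D) / len(D), sum(D2) / len(D2)

# Point-wise stability: at every output x, the densities of the mechanism's
# output distributions on D and D2 differ by at most a factor of e^eps.
for x in [0.0, 0.3, 0.55, 1.0]:
    ratio = laplace_pdf(x, m1, scale) / laplace_pdf(x, m2, scale)
    assert ratio <= math.exp(epsilon) + 1e-12
```

The δ > 0 case studied in this paper relaxes exactly this density-ratio bound, allowing the guarantee to fail on an event of probability at most δ.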
