On the Differential Privacy of Bayesian Inference

AAAI Conferences

While B wants to learn as much as possible from the data, she doesn't want A to learn about any individual datum. This is for example the case where A is an insurance agency, the data are medical records, and B wants to convey the efficacy of drugs to the agency, without revealing the specific illnesses of individuals in the population. Such requirements of privacy are of growing interest in the learning (Chaudhuri and Hsu 2012; Duchi, Jordan, and Wainwright 2013), theoretical computer science (Dwork and Smith 2009; McSherry and Talwar 2007) and databases communities (Barak et al. ...)

The latter achieves stealth through consistent posterior updates. For general Bayesian networks, posteriors may be nonparametric. In this case, we explore a mechanism (Dimitrakakis et al. 2014) which samples from the posterior to answer queries; no additional noise is injected. We complement our study with a maximum a posteriori estimator that leverages the exponential mechanism (McSherry and Talwar 2007). Our utility and privacy bounds connect privacy and graph/dependency structure, and are complemented by illustrative experiments with Bayesian naïve Bayes and linear regression.
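
A minimal sketch of the posterior-sampling idea mentioned above, assuming a Beta-Bernoulli model; the function name, the prior strength (alpha0, beta0), and the toy records are illustrative choices, not the paper's implementation:

    # Release a single draw from the Beta posterior as the query answer;
    # no noise is added beyond the randomness of posterior sampling itself.
    import numpy as np

    def sample_posterior(data, alpha0=2.0, beta0=2.0, rng=None):
        """One draw from Beta(alpha0 + #successes, beta0 + #failures)."""
        if rng is None:
            rng = np.random.default_rng()
        successes = int(np.sum(data))
        failures = len(data) - successes
        return rng.beta(alpha0 + successes, beta0 + failures)

    # The analyst sees only the sampled value, never the raw records.
    records = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    print(sample_posterior(records))

Intuitively, a stronger prior concentrates the posterior and limits how much any single record can shift the released sample, which is the lever such a privacy analysis works with.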


Differentially Private Dropout

arXiv.org Machine Learning

Large data collections required for training neural networks often contain sensitive information such as patients' medical histories, so the privacy of the training data must be preserved. In this paper, we introduce a dropout technique with an elegant Bayesian interpretation, and show that the noise it intrinsically adds for regularization can be exploited to obtain a degree of differential privacy. The iterative nature of neural network training presents a challenge for privacy-preserving estimation, since the privacy loss accumulates over many iterations. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter estimates of the overall privacy loss. We demonstrate the accuracy of our privacy-preserving dropout algorithm on benchmark datasets.
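
A hedged illustration of why the relaxed accounting helps, not the paper's accountant: in the zero-concentrated (zCDP) variant of concentrated differential privacy, each Gaussian release with noise multiplier sigma costs 1/(2*sigma^2), the costs add up over iterations, and the total converts to a single (epsilon, delta) guarantee. The values of sigma, steps, and delta below are made-up illustration numbers, and the sketch ignores refinements such as subsampling amplification:

    import math

    def zcdp_per_step(sigma):
        # Gaussian mechanism with noise multiplier sigma is (1 / (2 sigma^2))-zCDP.
        return 1.0 / (2.0 * sigma ** 2)

    def zcdp_to_eps(rho, delta):
        # rho-zCDP implies (rho + 2*sqrt(rho * ln(1/delta)), delta)-differential privacy.
        return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

    sigma, steps, delta = 100.0, 10_000, 1e-5
    rho_total = steps * zcdp_per_step(sigma)   # zCDP composes additively
    print(f"total zCDP budget: {rho_total:.3f}")
    print(f"equivalent epsilon at delta={delta}: {zcdp_to_eps(rho_total, delta):.2f}")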


On the Differential Privacy of Bayesian Inference

arXiv.org Machine Learning

We study how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on probabilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds for the specific but general case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. Worked examples and experiments with Bayesian naïve Bayes and Bayesian linear regression illustrate the application of our mechanisms.
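
As a worked illustration of the first kind of mechanism (noise added directly to the posterior parameters), here is a sketch for a Beta-Bernoulli model; it is an assumption-laden toy, not the paper's algorithm, and the prior (alpha0, beta0) and epsilon are arbitrary:

    # Laplace-perturb the sufficient statistics (success/failure counts) before
    # releasing the Beta posterior parameters. Under add/remove-one adjacency the
    # count vector changes by at most 1 in L1 norm, so scale 1/epsilon suffices
    # for epsilon-differential privacy; clamping is harmless post-processing.
    import numpy as np

    def private_beta_posterior(data, epsilon, alpha0=1.0, beta0=1.0, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        successes = float(np.sum(data))
        failures = float(len(data)) - successes
        noisy_s = successes + rng.laplace(scale=1.0 / epsilon)
        noisy_f = failures + rng.laplace(scale=1.0 / epsilon)
        return alpha0 + max(noisy_s, 0.0), beta0 + max(noisy_f, 0.0)

    print(private_beta_posterior(np.array([1, 0, 1, 1, 0]), epsilon=1.0))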


Differentially Private Variational Inference for Non-conjugate Models

arXiv.org Machine Learning

Many machine learning applications are based on data collected from people, such as their tastes and behaviour as well as biological traits and genetic data. Regardless of how important the application might be, one has to make sure individuals' identities or the privacy of the data are not compromised in the analysis. Differential privacy constitutes a powerful framework that prevents breaches of data subjects' privacy through the output of a computation. Differentially private versions of many important Bayesian inference methods have been proposed, but there is a lack of an efficient unified approach applicable to arbitrary models. In this contribution, we propose a differentially private variational inference method with very wide applicability. It is built on top of doubly stochastic variational inference, a recent advance which provides a variational solution to a large class of models. We add differential privacy into doubly stochastic variational inference by clipping and perturbing the gradients. The algorithm is made more efficient through privacy amplification from subsampling. We demonstrate that the method can reach an accuracy close to the non-private level under reasonably strong privacy guarantees, clearly improving over previous sampling-based alternatives, especially in the strong privacy regime.
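
A hedged sketch of the clip-and-perturb step described above, not the reference DPVI implementation; the function name and the parameters (clip_norm, noise_multiplier) are illustrative:

    # Clip each per-example contribution to the stochastic ELBO gradient, sum,
    # add Gaussian noise calibrated to the clipping bound, and average. Drawing
    # the minibatch uniformly at random is what enables the privacy
    # amplification by subsampling mentioned in the abstract.
    import numpy as np

    def dp_elbo_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                         rng=None):
        if rng is None:
            rng = np.random.default_rng()
        total = np.zeros_like(per_example_grads[0], dtype=float)
        for g in per_example_grads:
            scale = min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
            total += g * scale
        total += rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
        return total / len(per_example_grads)

The noisy gradient is then fed to an ordinary optimiser update of the variational parameters, and the privacy cost of each step is tracked with a composition theorem.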


Renyi Differential Privacy Mechanisms for Posterior Sampling

Neural Information Processing Systems

With the newly proposed privacy definition of Rényi Differential Privacy (RDP) in (Mironov, 2017), we re-examine the inherent privacy of releasing a single sample from a posterior distribution. We exploit the impact of the prior distribution in mitigating the influence of individual data points. In particular, we focus on sampling from an exponential family and specific generalized linear models, such as logistic regression. We propose novel RDP mechanisms and offer a new RDP analysis for an existing method, adding value to the RDP framework. Each method can achieve arbitrary RDP privacy guarantees, and we provide experimental results demonstrating their efficacy.
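
As an exploratory sketch (not a mechanism from the paper), one can numerically probe the quantity an RDP guarantee for posterior sampling must bound: the order-alpha Rényi divergence between the posteriors induced by two neighbouring datasets. The Beta-Bernoulli setting, the prior (a0, b0), and the chosen alpha below are assumptions for illustration:

    from math import lgamma

    def log_beta(a, b):
        return lgamma(a) + lgamma(b) - lgamma(a + b)

    def renyi_beta(a1, b1, a2, b2, alpha):
        """D_alpha(Beta(a1,b1) || Beta(a2,b2)) via the exponential-family identity."""
        am = alpha * a1 + (1 - alpha) * a2
        bm = alpha * b1 + (1 - alpha) * b2
        if am <= 0 or bm <= 0:
            return float("inf")
        return (log_beta(am, bm) - alpha * log_beta(a1, b1)
                - (1 - alpha) * log_beta(a2, b2)) / (alpha - 1)

    # Neighbouring datasets: n records, s vs. s + 1 successes, prior Beta(a0, b0).
    a0, b0, n, s, alpha = 2.0, 2.0, 100, 40, 5.0
    d1 = renyi_beta(a0 + s + 1, b0 + n - s - 1, a0 + s, b0 + n - s, alpha)
    d2 = renyi_beta(a0 + s, b0 + n - s, a0 + s + 1, b0 + n - s - 1, alpha)
    print(max(d1, d2))  # per-dataset probe; a true RDP bound takes the sup over all datasets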