On the Differential Privacy of Bayesian Inference
Zhang, Zuhe (University of Melbourne) | Rubinstein, Benjamin I. P. (University of Melbourne) | Dimitrakakis, Christos (Univ-Lille-3 and Chalmers University of Technology)
The latter achieves stealth through consistent posterior updates. For general Bayesian networks, posteriors may be nonparametric. In this case, we explore a mechanism (Dimitrakakis et al. 2014) which samples from the posterior to answer queries: no additional noise is injected. We complement our study with a maximum a posteriori estimator that leverages the exponential mechanism (McSherry and Talwar 2007). Our utility and privacy bounds connect privacy and graph/dependency structure, and are complemented by illustrative experiments with Bayesian naïve Bayes and linear regression.

While B wants to learn as much as possible from the data, she doesn't want A to learn about any individual datum. This is for example the case where A is an insurance agency, the data are medical records, and B wants to convey the efficacy of drugs to the agency, without revealing the specific illnesses of individuals in the population. Such requirements of privacy are of growing interest in the learning (Chaudhuri and Hsu 2012; Duchi, Jordan, and Wainwright 2013), theoretical computer science (Dwork and Smith 2009; McSherry and Talwar 2007), and databases communities (Barak et al.).
Apr-19-2016
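
As a rough, non-authoritative sketch of the two releases mentioned in the abstract above (posterior sampling with no added noise, and a maximum a posteriori estimate drawn via the exponential mechanism), the Python snippet below works through a toy Beta-Bernoulli model. The function names, the discretised parameter grid, and the sensitivity bound are illustrative assumptions, not the paper's construction.

    import numpy as np

    rng = np.random.default_rng(0)


    def posterior_params(data, alpha=1.0, beta=1.0):
        # Conjugate Beta posterior after observing binary data x_1..x_n.
        s = int(np.sum(data))
        return alpha + s, beta + len(data) - s


    def posterior_sampling_release(data, alpha=1.0, beta=1.0):
        # Answer a query by releasing a single draw from the posterior.
        # No additional noise is injected; privacy comes from the randomness
        # of posterior sampling itself (in the spirit of Dimitrakakis et al. 2014).
        a, b = posterior_params(data, alpha, beta)
        return rng.beta(a, b)


    def exponential_mechanism_map(data, alpha=1.0, beta=1.0, epsilon=1.0, grid_size=512):
        # MAP-style release via the exponential mechanism (McSherry and Talwar 2007),
        # sketched on a discretised parameter grid with the log-posterior as utility.
        # `sensitivity` is an illustrative placeholder; a real guarantee needs a
        # bound on how much one record can change the utility score.
        thetas = np.linspace(1e-3, 1.0 - 1e-3, grid_size)
        a, b = posterior_params(data, alpha, beta)
        log_post = (a - 1.0) * np.log(thetas) + (b - 1.0) * np.log1p(-thetas)
        sensitivity = abs(np.log(1e-3))       # assumption: likelihood bounded away from 0
        scores = epsilon * log_post / (2.0 * sensitivity)
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        return rng.choice(thetas, p=probs)


    if __name__ == "__main__":
        data = rng.binomial(1, 0.3, size=100)  # synthetic sensitive records
        print("posterior sample      :", posterior_sampling_release(data))
        print("exponential-mech. MAP :", exponential_mechanism_map(data, epsilon=0.5))

The key contrast the sketch is meant to show: the posterior-sampling release adds no explicit noise and relies on the spread of the posterior itself, whereas the exponential-mechanism release explicitly trades utility against the privacy parameter epsilon through the scaled log-posterior scores.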