Microsoft crosses privacy line few expected

FOX News




User-Specified Local Differential Privacy in Unconstrained Adaptive Online Learning

Neural Information Processing Systems

Local differential privacy is a strong notion of privacy in which the provider of the data guarantees privacy by perturbing the data with random noise. In the standard application of local differential privacy, the distribution of the noise is fixed and known to the learner. In this paper we generalize this approach by allowing the provider of the data to choose the noise distribution without disclosing any of its parameters to the learner, under the sole constraint that the distribution is symmetric. We study this problem in the unconstrained Online Convex Optimization setting with noisy feedback, in which the learner receives a subgradient of a loss function perturbed by noise and aims to achieve sublinear regret with respect to an arbitrary competitor, with no constraint on the norm of the competitor. We derive the first algorithms with adaptive regret bounds in this setting: they adapt to the unknown competitor norm, the unknown noise, and the unknown sum of the norms of the subgradients, matching state-of-the-art bounds in all cases.
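The setting described above can be sketched in a few lines of code: the data provider perturbs each subgradient with zero-mean symmetric noise of its own choosing, and the learner updates on the noisy feedback. This is a minimal illustration, not the paper's algorithm — the Laplace distribution, its scale, and the simple decaying-step-size update are all assumptions made here for concreteness; the paper only requires that the noise be symmetric, and its learners are adaptive rather than tuned.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_subgradient(g, scale):
    """Provider side: add zero-mean symmetric noise to the subgradient.

    Laplace noise with a provider-chosen `scale` is a hypothetical choice;
    the symmetry of the distribution is the only property the learner may
    rely on, and `scale` is never disclosed to the learner.
    """
    return g + rng.laplace(loc=0.0, scale=scale, size=g.shape)

def learner_step(w, noisy_g, t):
    """Learner side: a plain online gradient step with a 1/sqrt(t) step
    size, standing in for the paper's adaptive, parameter-free update."""
    return w - (0.1 / np.sqrt(t)) * noisy_g

# Toy run on linear losses l_t(w) = <g_t, w> with noisy feedback.
w = np.zeros(2)
for t in range(1, 101):
    g = np.array([1.0, -0.5])                 # true subgradient at w
    noisy_g = privatize_subgradient(g, scale=2.0)
    w = learner_step(w, noisy_g, t)
```

Because the noise is symmetric and zero-mean, the noisy subgradients are unbiased estimates of the true ones, which is what lets the learner still achieve sublinear regret despite never knowing the noise distribution.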