Differentially Private Algorithms for Learning Mixtures of Separated Gaussians

Gautam Kamath, Or Sheffet, Vikrant Singhal, Jonathan Ullman

Neural Information Processing Systems

In this work, we study algorithms for learning Gaussian mixtures subject to differential privacy [32], which has become the de facto standard for individual privacy in statistical analysis of sensitive data. Intuitively, differential privacy guarantees that the output of the algorithm does not depend significantly on any one individual's data, which in this case means any one sample.
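For reference, the formal guarantee (a standard definition, supplied here for clarity rather than quoted from the paper) is $(\varepsilon, \delta)$-differential privacy: an algorithm $M$ satisfies it if, for all datasets $X, X'$ differing in a single sample and all measurable output sets $S$,

$$\Pr[M(X) \in S] \le e^{\varepsilon} \Pr[M(X') \in S] + \delta.$$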





Neural Information Processing Systems

Let $T$ be the time horizon and $P_T$ be the path-length that essentially reflects the non-stationarity of environments; the state-of-the-art dynamic regret bound is $O(\sqrt{T(1+P_T)})$.
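For context, both quantities have standard definitions (given here for clarity, not quoted from the paper): against an arbitrary comparator sequence $u_1, \dots, u_T$, the dynamic regret and path-length are

$$\text{D-Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t), \qquad P_T = \sum_{t=2}^{T} \|u_{t-1} - u_t\|_2,$$

where $x_t$ is the learner's decision and $f_t$ the loss function at round $t$.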


Exploiting the Surrogate Gap in Online Multiclass Classification

Neural Information Processing Systems

In online multiclass classification, a learner has to repeatedly predict the label that corresponds to a feature vector. Algorithms in this setting have a wide range of applications, from predicting the outcomes of sports matches to recommender systems.
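A minimal sketch of the online multiclass protocol, with a plain multiclass perceptron as the learner (illustrative only: this is not the paper's surrogate-gap algorithm, and the function name and interface are hypothetical):

import numpy as np

def online_multiclass(stream, d, K, lr=1.0):
    # stream yields (x, y) pairs: x is a length-d feature vector, y in {0..K-1}.
    # Maintains one weight vector per class; returns the mistake count.
    W = np.zeros((K, d))
    mistakes = 0
    for x, y in stream:
        y_hat = int(np.argmax(W @ x))  # predict the highest-scoring label
        if y_hat != y:                 # the true label is then revealed
            mistakes += 1
            W[y] += lr * x             # promote the correct class
            W[y_hat] -= lr * x         # demote the wrongly predicted class
    return mistakes

The update above is the classical multiclass perceptron step; a surrogate-gap method would additionally use the margin between the two top scores when deciding how to predict.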


Efficient Methods for Non-stationary Online Learning

Neural Information Processing Systems

In particular, dynamic regret [Zinkevich, 2003; Zhang et al., 2018a] and adaptive regret [Hazan and Seshadhri, 2009; Daniely et al., 2015] are proposed as two principled metrics to guide algorithm design. The unknown comparators or unknown intervals bring considerable uncertainty to online optimization.
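For clarity (a standard definition, not quoted from the paper): while dynamic regret measures performance against a time-varying comparator sequence, adaptive regret takes the worst static regret over any contiguous interval,

$$\text{A-Regret}_T = \max_{[r, s] \subseteq [T]} \left( \sum_{t=r}^{s} f_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=r}^{s} f_t(x) \right),$$

which is why the unknown intervals mentioned above are a source of uncertainty.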


Stability and Deviation Optimal Risk Bounds with Convergence Rate O(1/n)

Neural Information Processing Systems

The sharpest known high probability generalization bounds for uniformly stable algorithms (Feldman, Vondrak, NeurIPS 2018, COLT, 2019), (Bousquet, Klochkov, Zhivotovskiy, COLT, 2020) contain a generally inevitable sampling error term of order $\Theta(1/\sqrt{n})$. When applied to excess risk bounds, this leads to suboptimal results in several standard stochastic convex optimization problems. We show that if the so-called Bernstein condition is satisfied, the term $\Theta(1/\sqrt{n})$ can be avoided, and high probability excess risk bounds of order up to $O(1/n)$ are possible via uniform stability. Using this result, we show a high probability excess risk bound with the rate $O(\log n/n)$ for strongly convex and Lipschitz losses valid for \emph{any} empirical risk minimization method.
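For reference, the stability notion behind these bounds (a standard definition, supplied here for clarity): an algorithm $A$ is $\gamma$-uniformly stable if for any two samples $S, S'$ of size $n$ differing in a single element and any point $z$,

$$|\ell(A(S), z) - \ell(A(S'), z)| \le \gamma.$$

The cited high-probability bounds combine a term scaling with $\gamma$ and the sampling error term of order $\Theta(1/\sqrt{n})$; the Bernstein condition is what allows the latter term to be removed.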




Reviews: Weighted Linear Bandits for Non-Stationary Environments

Neural Information Processing Systems

Update (after reading the rebuttals): After reading the authors' rebuttal, my concerns about the novelty of the new self-normalized concentration have been addressed, since the key point is that the coefficient of the regularizer is changing. I do appreciate this work. The idea of this paper is natural, but there are real technical challenges, and the authors address them elegantly. So I think it deserves acceptance. Nevertheless, there are still many typos in the current version besides those listed before; for example, in Theorem 2, eq.