Collaborating Authors

Feldman



Coreset for Line-Sets Clustering

Neural Information Processing Systems

A natural generalization is to replace this input set P of n points by a set P of n sets in X. The distance from such an input set P ∈ P to a set C of centers can then be defined as the distance between the closest point-center pair. This problem is called k-means for sets; see e.g.
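The set-to-centers distance described above can be sketched in a few lines. This is an illustrative toy, not code from the paper; the function names and the squared-Euclidean ground distance are our own assumptions.

```python
import itertools

def set_to_centers_dist(point_set, centers):
    """Distance from one input set P to the centers C: the squared
    Euclidean distance of the closest point-center pair, as in the
    k-means-for-sets objective. (Hypothetical helper, names are ours.)"""
    return min(
        sum((p - c) ** 2 for p, c in zip(pt, ctr))
        for pt, ctr in itertools.product(point_set, centers)
    )

def k_means_for_sets_cost(input_sets, centers):
    """Sum of set-to-centers distances over all n input sets."""
    return sum(set_to_centers_dist(P, centers) for P in input_sets)

# toy example in the plane: two input sets, one center at (0, 1)
sets_ = [[(0.0, 0.0), (5.0, 5.0)], [(1.0, 1.0)]]
centers = [(0.0, 1.0)]
print(k_means_for_sets_cost(sets_, centers))  # 1.0 + 1.0 = 2.0
```

Each input set contributes only its best point, so a coreset for this problem must preserve these min-over-pairs distances for every candidate center set.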







All ERMs Can Fail in Stochastic Convex Optimization: Lower Bounds in Linear Dimension

Burla, Tal, Livni, Roi

arXiv.org Machine Learning

We study the sample complexity of the best-case Empirical Risk Minimizer in the setting of stochastic convex optimization. We show that there exists an instance in which the sample size is linear in the dimension, learning is possible, but the Empirical Risk Minimizer is likely to be unique and to overfit. This resolves an open question by Feldman. We also extend this to approximate ERMs. Building on our construction we also show that (constrained) Gradient Descent potentially overfits when horizon and learning rate grow w.r.t. sample size. Specifically, we provide a novel generalization lower bound of $\Omega\left(\sqrt{\eta T/m^{1.5}}\right)$ for Gradient Descent, where $\eta$ is the learning rate, $T$ is the horizon and $m$ is the sample size. This narrows down, exponentially, the gap between the best known upper bound of $O(\eta T/m)$ and existing lower bounds from previous constructions.
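The algorithm the abstract analyzes, constrained (projected) gradient descent with step size $\eta$ run for $T$ iterations, can be sketched as follows. This is a generic sketch of the method on a toy objective of our own choosing, not the construction from the paper.

```python
def projected_gd(grad, x0, eta, T, radius=1.0):
    """Constrained gradient descent: T steps of size eta, projecting
    back onto the Euclidean ball of the given radius after each step.
    (Generic sketch; the constraint set and objective are assumptions.)"""
    x = list(x0)
    for _ in range(T):
        g = grad(x)
        x = [xi - eta * gi for xi, gi in zip(x, g)]
        norm = sum(xi ** 2 for xi in x) ** 0.5
        if norm > radius:  # project onto the constraint ball
            x = [xi * radius / norm for xi in x]
    return x

# toy empirical risk f(x) = ||x - b||^2 with minimizer b outside the ball,
# so the projection step is active at the solution
b = [2.0, 0.0]
grad = lambda x: [2 * (xi - bi) for xi, bi in zip(x, b)]
x = projected_gd(grad, [0.0, 0.0], eta=0.1, T=100)  # converges to (1.0, 0.0)
```

The lower bound above concerns how far the empirical risk this procedure minimizes can drift from the population risk as $\eta T$ grows relative to $m$.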



Learning with User-Level Privacy

Neural Information Processing Systems

Releasing seemingly innocuous functions of a data set can easily compromise the privacy of individuals, whether the functions are simple counts [35] or complex machine learning models like deep neural networks [52, 30].


Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study

Neural Information Processing Systems

One of the great mysteries of contemporary machine learning is the impressive success of unregularized and overparameterized learning algorithms. In detail, current machine learning practice is to train models with far more parameters than samples and let the algorithm fit the data, oftentimes without any type of regularization. In fact, these algorithms are so overcapacitated that they can even memorize and fit random data (Neyshabur et al., 2015; Zhang et al., 2017). Yet, when trained on real-life data, these algorithms show remarkable performance in generalizing to unseen samples. This phenomenon is often attributed to what is described as the implicit regularization of an algorithm (Neyshabur et al., 2015). Implicit regularization roughly refers to the learner's preference for implicitly choosing certain structured solutions, as if some explicit regularization term appeared in its objective.