Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study

Neural Information Processing Systems 

One of the great mysteries of contemporary machine learning is the impressive success of unregularized and overparameterized learning algorithms. Specifically, current machine learning practice is to train models with far more parameters than samples and let the algorithm fit the data, often without any form of regularization. In fact, these algorithms are so overcapacitated that they can even memorize and fit random data (Neyshabur et al., 2015; Zhang et al., 2017). Yet, when trained on real-life data, they generalize remarkably well to unseen samples. This phenomenon is often attributed to the implicit regularization of the algorithm (Neyshabur et al., 2015): roughly, the learner's tendency to implicitly prefer certain structured solutions, as if an explicit regularization term appeared in its objective.
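The memorization phenomenon is easy to reproduce in the simplest overparameterized setting. The sketch below (not from the paper; an illustrative numpy example with arbitrary dimensions) fits a linear model with far more parameters than samples to pure-noise labels. The minimum-Euclidean-norm interpolant computed via the pseudoinverse is the solution that gradient descent initialized at zero converges to on the least-squares objective, which is a classic example of an implicit bias toward structured (low-norm) solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 200  # far more parameters (d) than samples (n)

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)  # random "labels": there is nothing to learn

# Minimum-norm interpolating solution, w = X^+ y. Gradient descent on
# squared loss from zero initialization converges to this same solution,
# an instance of implicit regularization toward small Euclidean norm.
w = np.linalg.pinv(X) @ y

# The model fits the random labels exactly (memorization).
train_err = np.max(np.abs(X @ w - y))
print(train_err < 1e-8)
```

With high probability a random wide matrix `X` has full row rank, so an exact interpolant exists and the training error is numerically zero despite the labels being noise.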
