Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study
Neural Information Processing Systems
One of the great mysteries of contemporary machine learning is the impressive success of unregularized and overparameterized learning algorithms. In detail, current machine learning practice is to train models with far more parameters than samples and let the algorithm fit the data, oftentimes without any type of regularization. In fact, these algorithms are so overcapacitated that they can even memorize and fit random data (Neyshabur et al., 2015; Zhang et al., 2017). Yet, when trained on real-life data, these algorithms show remarkable performance in generalizing to unseen samples. This phenomenon is often attributed to what is described as the implicit regularization of an algorithm (Neyshabur et al., 2015). Implicit regularization roughly refers to the learner's preference to implicitly choose certain structured solutions, as if some explicit regularization term appeared in its objective.
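The memorization phenomenon above can be illustrated with a minimal sketch (not from the paper): an overparameterized linear model, with more parameters than samples, fit to purely random labels. The dimensions and data here are arbitrary choices for illustration; `lstsq` returns the minimum-norm interpolating solution, which is also the solution gradient descent converges to from zero initialization, a classic example of implicit bias.

```python
import numpy as np

# Illustrative setup (hypothetical sizes): 20 samples, 100 parameters.
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)  # pure-noise "labels" -- nothing to learn

# Minimum-norm least-squares solution: with d > n and X of full row rank,
# it interpolates the random labels exactly (zero training error).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

train_err = np.linalg.norm(X @ w - y)
print(f"training error: {train_err:.2e}")  # essentially zero: perfect memorization
```

The model fits the noise perfectly, yet on real data the same minimum-norm preference is often credited with the good generalization discussed above.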