Unraveling the Enigma of Double Descent: An In-depth Analysis through the Lens of Learned Feature Space
Gu, Yufei, Zheng, Xiaoqing, Aste, Tomaso
arXiv.org Artificial Intelligence
Double descent is a counter-intuitive phenomenon in the machine learning domain, and researchers have observed its manifestation in various models and tasks. While theoretical explanations have been proposed for this phenomenon in specific contexts, an accepted theory for its occurrence in deep learning has yet to be established. In this study, we revisit the phenomenon of double descent and demonstrate that its occurrence is strongly influenced by the presence of noisy data. Through a comprehensive analysis of the feature space of learned representations, we show that double descent arises in imperfect models trained on noisy data. We argue that double descent is a consequence of the model first fitting the noisy data until interpolation and then, through the implicit regularization conferred by over-parameterization, acquiring the capability to separate the signal from the noise.

As a well-known property of machine learning, the bias-variance trade-off suggests that the variance of parameter estimates across samples can be reduced by increasing the bias of the estimator (Geman et al., 1992). An under-parameterized model combines high bias with low variance; in this regime the model is trapped in under-fitting, unable to capture the fundamental structures underlying the data. An over-parameterized model combines low bias with high variance, usually because it over-fits noisy and unrepresentative data in the training set.
Dec-5-2023