How Does Overparameterization Affect Features?

Ahmet Cagri Duzgun, Samy Jelassi, Yuanzhi Li

arXiv.org Artificial Intelligence 

Overparameterization, the condition where models have more parameters than necessary to fit their training loss, is a crucial factor for the success of deep learning. However, the characteristics of the features learned by overparameterized networks are not well understood. In this work, we explore this question by comparing models with the same architecture but different widths. We first examine the expressivity of the features of these models, and show that the feature space of overparameterized networks cannot be spanned by concatenating many underparameterized features, and vice versa. This reveals that both overparameterized and underparameterized networks acquire some distinctive features. We then evaluate the performance of these models, and find that overparameterized networks outperform underparameterized networks, even when many of the latter are concatenated. We corroborate these findings using a VGG-16 and ResNet18 on CIFAR-10 and a Transformer on the MNLI classification dataset. Finally, we propose a toy setting to explain how overparameterized networks can learn some important features that the underparameterized networks cannot learn.

Overparameterized neural networks, which have more parameters than necessary to fit the training data, have achieved remarkable success in various tasks, such as image classification (He et al., 2016; Krizhevsky et al., 2017), object detection (Girshick et al., 2014; Redmon et al., 2016), and text classification (Zhang et al., 2015; Johnson & Zhang, 2016). However, the theoretical understanding of why these networks outperform underparameterized ones, which have fewer parameters and less capacity, is still limited.
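The span test described in the abstract can be pictured with a small sketch. The abstract does not spell out the exact protocol, so the details below are assumptions: features are taken to be penultimate-layer activations extracted on a held-out set, the matrices here are random placeholders standing in for those activations, and `span_residual` is a hypothetical helper that fits a least-squares linear map from the concatenated narrow-network features to the wide-network features and reports the fraction of variance left unexplained.

```python
import numpy as np

def span_residual(source_feats, target_feats):
    """Fit a linear map from source features to target features by least
    squares and return the fraction of target variance left unexplained.
    A value near 0 means the target features lie (almost) in the span of
    the source features; a value near 1 means they do not."""
    # Center both feature matrices so the fit ignores mean offsets.
    X = source_feats - source_feats.mean(axis=0, keepdims=True)
    Y = target_feats - target_feats.mean(axis=0, keepdims=True)
    # Least-squares solution of X @ W ≈ Y.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    residual = Y - X @ W
    return np.sum(residual ** 2) / np.sum(Y ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_samples = 2000
    # Placeholder feature matrices; in practice these would be activations
    # extracted from trained narrow and wide networks on held-out data.
    narrow_feats = [rng.standard_normal((n_samples, 64)) for _ in range(8)]
    wide_feats = rng.standard_normal((n_samples, 512))
    concat_narrow = np.concatenate(narrow_feats, axis=1)  # shape (n_samples, 512)
    print("Unexplained variance of wide features given concatenated narrow features:",
          span_residual(concat_narrow, wide_feats))
```

Running the same measurement in both directions (narrow features as target, wide features as source, and vice versa) is one way to operationalize the claim that neither feature space spans the other.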
