Architectural Bias



Bias of Stochastic Gradient Descent or the Architecture: Disentangling the Effects of Overparameterization of Neural Networks

Peleg, Amit, Hein, Matthias

arXiv.org Artificial Intelligence

Neural networks typically generalize well when fitting the data perfectly, even though they are heavily overparameterized. Many factors have been pointed out as the reason for this phenomenon, including an implicit bias of stochastic gradient descent (SGD) and a possible simplicity bias arising from the neural network architecture. The goal of this paper is to disentangle the factors that influence generalization stemming from optimization and architectural choices by studying random and SGD-optimized networks that achieve zero training error. We experimentally show, in the low sample regime, that overparameterization in terms of increasing width is beneficial for generalization, and this benefit is due to the bias of SGD and not due to an architectural bias. In contrast, for increasing depth, overparameterization is detrimental for generalization, but random and SGD-optimized networks behave similarly, so this can be attributed to an architectural bias.

The implicit bias of Stochastic Gradient Descent (SGD) is thus often thought to be the main reason behind generalization (Arora et al., 2019; Shah et al., 2020). A recent thought-provoking study by Chiang et al. (2023) suggests the idea of the volume hypothesis for generalization: well-generalizing basins of the loss occupy a significantly larger volume in the weight space of neural networks than basins that do not generalize well. They argue that the generalization performance of neural networks is primarily a bias of the architecture and that the implicit bias of SGD is only a secondary effect. To this end, they randomly sample networks that achieve zero training error (which they term Guess and Check (G&C)) and argue that the generalization performance of these networks is qualitatively similar to networks found by SGD. In this work, we revisit the approach of Chiang et al. (2023) and study it in detail to disentangle the effects of implicit bias of SGD from a potential bias of the choice of architecture. As we have to compare to randomly sampled neural
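
As a concrete picture of the Guess and Check (G&C) baseline described above, here is a minimal NumPy sketch, assuming a one-hidden-layer ReLU network with ±1 labels; the function name, sampling distributions, and scalings are illustrative choices, not details taken from either paper.

```python
import numpy as np

def guess_and_check(X, y, width, max_tries=100_000, seed=0):
    # Guess and Check (G&C): repeatedly sample random networks and keep the
    # first one that reaches zero training error. Here: a one-hidden-layer
    # ReLU net with a sign readout and +/-1 labels.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    for _ in range(max_tries):
        W1 = rng.normal(size=(d, width)) / np.sqrt(d)   # input -> hidden
        w2 = rng.normal(size=width) / np.sqrt(width)    # hidden -> output
        pred = np.sign(np.maximum(X @ W1, 0.0) @ w2)
        if np.all(pred == y):       # zero training error: keep this sample
            return W1, w2
    return None                     # budget exhausted, no interpolator found

# Tiny usage example: 8 points in 5 dimensions with random +/-1 labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 5))
y = rng.choice([-1.0, 1.0], size=8)
print("interpolating net found:", guess_and_check(X, y, width=32) is not None)
```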


Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias

d'Ascoli, Stéphane, Sagun, Levent, Biroli, Giulio, Bruna, Joan

Neural Information Processing Systems

Despite the phenomenal success of deep neural networks in a broad range of learning tasks, there is a lack of theory to understand the way they work. In particular, Convolutional Neural Networks (CNNs) are known to perform much better than Fully-Connected Networks (FCNs) on spatially structured data: the architectural structure of CNNs benefits from prior knowledge on the features of the data, for instance their translation invariance. The aim of this work is to understand this fact through the lens of dynamics in the loss landscape. We introduce a method that maps a CNN to its equivalent FCN (denoted as eFCN). Such an embedding enables the comparison of CNN and FCN training dynamics directly in the FCN space.
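
To make the CNN-to-eFCN embedding concrete: a convolution is a linear map, so it can be written out as a fully-connected weight matrix that is sparse and has shared (Toeplitz-structured) entries. The sketch below does this in NumPy for a single-channel, stride-1, unpadded layer; the helper name and the sanity check are our illustrative choices, not the authors' code.

```python
import numpy as np

def conv2d_as_matrix(kernel, in_h, in_w):
    # Dense matrix of a single-channel, stride-1, unpadded 2D convolution
    # (cross-correlation, as in CNNs): W @ x.ravel() == conv(x).ravel().
    kh, kw = kernel.shape
    out_h, out_w = in_h - kh + 1, in_w - kw + 1
    W = np.zeros((out_h * out_w, in_h * in_w))
    for i in range(out_h):
        for j in range(out_w):
            for di in range(kh):
                for dj in range(kw):
                    # Each output pixel sees only a small input patch, and
                    # every row reuses the same kernel entries: the matrix
                    # is sparse with shared weights (Toeplitz-like blocks).
                    W[i * out_w + j, (i + di) * in_w + (j + dj)] = kernel[di, dj]
    return W

# Sanity check against a direct sliding-window computation.
rng = np.random.default_rng(0)
x, k = rng.normal(size=(5, 5)), rng.normal(size=(3, 3))
W = conv2d_as_matrix(k, 5, 5)
direct = np.array([[(x[i:i + 3, j:j + 3] * k).sum() for j in range(3)]
                   for i in range(3)])
assert np.allclose(W @ x.ravel(), direct.ravel())
```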


Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias

d'Ascoli, Stéphane, Sagun, Levent, Bruna, Joan, Biroli, Giulio

arXiv.org Machine Learning

Despite the phenomenal success of deep neural networks in a broad range of learning tasks, there is a lack of theory to understand the way they work. In particular, Convolutional Neural Networks (CNNs) are known to perform much better than Fully-Connected Networks (FCNs) on spatially structured data: the architectural structure of CNNs benefits from prior knowledge on the features of the data, for instance their translation invariance. The aim of this work is to understand this fact through the lens of dynamics in the loss landscape. We introduce a method that maps a CNN to its equivalent FCN (denoted as eFCN). Such an embedding enables the comparison of CNN and FCN training dynamics directly in the FCN space. We use this method to test a new training protocol, which consists in training a CNN, embedding it to FCN space at a certain 'switch time' $t_w$, then resuming the training in FCN space. We observe that for all switch times, the deviation from the CNN subspace is small, and the final performance reached by the eFCN is higher than that reachable by the standard FCN. More surprisingly, for some intermediate switch times, the eFCN even outperforms the CNN it stemmed from. The practical interest of our protocol is limited by the very large size of the highly sparse eFCN. However, it offers an interesting insight into the persistence of the architectural bias under the stochastic gradient dynamics even in the presence of a huge number of additional degrees of freedom. It shows the existence of some rare basins in the FCN space associated with very good generalization. These can be accessed thanks to the CNN prior, and are otherwise missed.
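
The protocol is easy to sketch end to end. The PyTorch toy below trains a small CNN for $t_w$ steps, materializes its conv layer as a dense linear layer by pushing the canonical basis through it (valid for a bias-free layer), verifies the embedding is exact, and then resumes training with every entry of the dense matrix free to move. All layer sizes, the regression head, and the step counts are illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

def conv_to_linear(conv, in_h, in_w):
    # Materialize the conv layer's sparse, Toeplitz-structured matrix by
    # pushing the canonical basis of the input space through it (valid for
    # a bias-free layer), then wrap the result in a dense nn.Linear.
    with torch.no_grad():
        basis = torch.eye(in_h * in_w).reshape(-1, 1, in_h, in_w)
        W = conv(basis).flatten(1).T.contiguous()  # (out_size, in_size)
        lin = nn.Linear(in_h * in_w, W.shape[0], bias=False)
        lin.weight.copy_(W)
    return lin

in_h = in_w = 8
conv = nn.Conv2d(1, 1, kernel_size=3, bias=False)   # 8x8 -> 6x6 feature map
head = nn.Linear(36, 1, bias=False)                 # flattened 6x6 -> scalar
x, y = torch.randn(16, 1, in_h, in_w), torch.randn(16, 1)

# Phase 1: ordinary CNN training up to the switch time t_w.
opt = torch.optim.SGD(list(conv.parameters()) + list(head.parameters()), lr=0.01)
for _ in range(50):                                  # t_w = 50 steps
    loss = ((head(conv(x).flatten(1)) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Switch: embed the CNN into FCN space (the eFCN) and check equivalence.
efcn = conv_to_linear(conv, in_h, in_w)
assert torch.allclose(efcn(x.flatten(1)), conv(x).flatten(1), atol=1e-5)

# Phase 2: resume training in FCN space; every entry of the dense matrix is
# now a free parameter, so weight sharing and sparsity may break under SGD.
opt = torch.optim.SGD(list(efcn.parameters()) + list(head.parameters()), lr=0.01)
for _ in range(200):
    loss = ((head(efcn(x.flatten(1))) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The paper's observation, in this vocabulary, is that after the switch the dense weights stay close to the convolutional subspace and can end up generalizing better than an FCN trained from scratch.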