
Collaborating Authors

 Jacot, Arthur


Shallow diffusion networks provably learn hidden low-dimensional structure

arXiv.org Machine Learning

Generative models learn to sample from a target probability distribution given a dataset of examples. Applications are pervasive, and include language modeling (Li et al., 2022), high-fidelity image generation (Rombach et al., 2022), de-novo drug design (Watson et al., 2023), and molecular dynamics (Arts et al., 2023). Recent years have witnessed extremely rapid advancements in the field of generative modeling, particularly with the development of models based on dynamical transport of measure (Santambrogio, 2015), such as diffusion-based generative models (Ho et al., 2020; Song et al., 2021), stochastic interpolants (Albergo et al., 2023), flow matching (Lipman et al., 2023), and rectified flow (Liu et al., 2023) approaches. Yet, despite their strong empirical performance and well-grounded mathematical formulation, a theoretical understanding of how and why these large-scale generative models work is still in its infancy. A promising line of recent research has shown that the problem of sampling from an arbitrarily complex distribution can be reduced to unsupervised learning: for diffusion models, if an accurate velocity or score field can be estimated from data, then high-quality samples can be generated via numerical simulation (Chen et al., 2023a; Lee et al., 2023). While deeply insightful, these works leave open the difficulty of statistical estimation, and therefore raise the possibility that the sampling problem's true difficulty is hidden in the complexity of learning. In this work, we address this fundamental challenge by presenting an end-to-end analysis of sampling with score-based diffusion models. To balance tractability of the analysis with empirical relevance, we study the Barron space of single-layer neural networks (E et al., 2019; Bach, 2017).
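The setting described above can be made concrete with a minimal sketch: a single-hidden-layer (Barron-class) score network trained with denoising score matching at one fixed noise level. The data distribution, width, noise level, and optimizer below are illustrative assumptions, not the paper's construction or its guarantees.

# Minimal sketch (not the paper's exact estimator): denoising score matching with a
# single-hidden-layer network, illustrating the shallow-network setting.
import torch, torch.nn as nn

d, width, sigma = 2, 256, 0.5                      # data dim, hidden width, noise level
data = torch.randn(4096, d) @ torch.tensor([[1.0, 0.0], [0.0, 0.1]])  # near-low-dimensional target

score_net = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, d))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)

for step in range(2000):
    x0 = data[torch.randint(len(data), (256,))]
    eps = torch.randn_like(x0)
    xt = x0 + sigma * eps                          # perturb data with Gaussian noise
    # Denoising score matching: the conditional score of the perturbed sample is -eps / sigma
    loss = ((score_net(xt) + eps / sigma) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()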


Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse

arXiv.org Machine Learning

Among the many possible interpolators that a deep neural network (DNN) can find, Papyan et al. (2020) showed a strong bias of gradient-based training towards representations with a highly symmetric structure in the penultimate layer, which was dubbed neural collapse (NC). In particular, the feature vectors of the training data in the penultimate layer collapse to a single vector per class (NC1); these vectors form orthogonal or simplex equiangular tight frames (NC2), and they are aligned with the last layer's row weight vectors (NC3). The question of why and how neural collapse emerges has been the focus of a popular line of research, see e.g. Lu & Steinerberger (2022); E & Wojtowytsch (2022) and the discussion in Section 2. Many of these works focus on a simplified mathematical framework: the unconstrained features model (UFM) (Mixon et al., 2020; Han et al., 2022; Zhou et al., 2022a), corresponding to the joint optimization over the last layer's weights and the penultimate layer's feature representations, which are treated as free variables. To account for the existence of the training data and of all the layers before the penultimate (i.e., the backbone of the network), some form of regularization on the free features is usually added. A number of papers have proved the optimality of NC in this model (Lu & Steinerberger, 2022; E & Wojtowytsch, 2022), its emergence with gradient-based methods (Mixon et al., 2020; Han et al., 2022), and a benign loss landscape (Zhou et al., 2022a; Zhu et al., 2021). However, the major drawback of the UFM lies in its data-agnostic nature: it only acknowledges the presence of training data and backbone through a simple form of regularization (e.g., Frobenius norm or sphere constraint), which is far from being equivalent to end-to-end training.
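As a rough illustration of the unconstrained features model discussed above, the following sketch jointly optimizes free features H and last-layer weights W with weight decay and then measures within-class versus between-class variability (an NC1-style statistic). Class count, dimensions, and the regularization strength are illustrative assumptions.

# Minimal sketch of the unconstrained features model (UFM): free features H and
# classifier W are jointly optimized with weight decay on both.
import torch, torch.nn as nn

K, d, n_per, lam = 4, 16, 32, 5e-3                 # classes, feature dim, samples/class, weight decay
labels = torch.arange(K).repeat_interleave(n_per)
H = nn.Parameter(torch.randn(K * n_per, d))        # "free" penultimate-layer features
W = nn.Parameter(torch.randn(K, d) / d**0.5)       # last-layer weights
opt = torch.optim.SGD([H, W], lr=0.1, weight_decay=lam)

for step in range(5000):
    loss = nn.functional.cross_entropy(H @ W.T, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# NC1 check: within-class variability should become small relative to the class-mean spread.
means = torch.stack([H[labels == k].mean(0) for k in range(K)])
within = sum(((H[labels == k] - means[k]) ** 2).sum() for k in range(K))
between = ((means - means.mean(0)) ** 2).sum()
print((within / between).item())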


How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning

arXiv.org Machine Learning

We show that deep neural networks (DNNs) can efficiently learn any composition of functions with bounded $F_{1}$-norm, which allows DNNs to break the curse of dimensionality in ways that shallow networks cannot. More specifically, we derive a generalization bound that combines a covering number argument for compositionality, and the $F_{1}$-norm (or the related Barron norm) for large width adaptivity. We show that the global minimizer of the regularized loss of DNNs can fit, for example, the composition of two functions $f^{*}=h\circ g$ from a small number of observations, assuming $g$ is smooth/regular and reduces the dimensionality (e.g. $g$ could be the modulo map of the symmetries of $f^{*}$), so that $h$ can be learned in spite of its low regularity. The measure of regularity we consider is the Sobolev norm with different levels of differentiability, which is well adapted to the $F_{1}$-norm. We compute scaling laws empirically and observe phase transitions depending on whether $g$ or $h$ is harder to learn, as predicted by our theory.
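A toy version of the compositional setup $f^{*}=h\circ g$ can be written down directly; here $g$ is a rotation-invariant map that collapses $\mathbb{R}^{d}$ to one dimension and $h$ is a less regular scalar function. The specific choices of $g$, $h$, architecture, and training loop are illustrative assumptions, not the constructions used in the paper's bounds.

# Illustrative sketch of the compositional target f* = h o g, where g reduces
# dimensionality (here a symmetry-invariant map) and h has low regularity.
import torch, torch.nn as nn

d = 8
g = lambda x: x.norm(dim=-1, keepdim=True)             # rotation-invariant: maps R^d -> R
h = lambda r: torch.abs(torch.sin(3 * r))              # low-regularity 1-d function
x = torch.randn(2048, d)
y = h(g(x))                                            # targets f*(x) = h(g(x))

# A deep network can, in principle, emulate the composition block-by-block.
net = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)
for step in range(3000):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()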


Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets

arXiv.org Machine Learning

We study Leaky ResNets, which interpolate between ResNets ($\tilde{L}=0$) and Fully-Connected nets ($\tilde{L}\to\infty$) depending on an 'effective depth' hyper-parameter $\tilde{L}$. In the infinite depth limit, we study 'representation geodesics' $A_{p}$: continuous paths in representation space (similar to NeuralODEs) from input $p=0$ to output $p=1$ that minimize the parameter norm of the network. We give a Lagrangian and Hamiltonian reformulation, which highlights the importance of two terms: a kinetic energy which favors small layer derivatives $\partial_{p}A_{p}$, and a potential energy that favors low-dimensional representations, as measured by the 'Cost of Identity'. The balance between these two forces offers an intuitive understanding of feature learning in ResNets. We leverage this intuition to explain the emergence of a bottleneck structure, as observed in previous work: for large $\tilde{L}$ the potential energy dominates and leads to a separation of timescales, where the representation jumps rapidly from the high-dimensional inputs to a low-dimensional representation, moves slowly inside the space of low-dimensional representations, and then jumps back to the potentially high-dimensional outputs. Inspired by this phenomenon, we train with an adaptive layer step-size to adapt to the separation of timescales.
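One plausible way to write down such an interpolation in code is sketched below; this exact parameterization of the 'leak' is an assumption made for illustration and is not necessarily the paper's definition, but it conveys how an effective-depth parameter trades off the skip connection against the nonlinear update, and how the per-layer representations discretize a path $A_{p}$.

# Sketch of a "leaky" residual update interpolating between a ResNet and a
# fully-connected net via an effective-depth parameter Ltilde (illustrative form).
import torch, torch.nn as nn

class LeakyResNet(nn.Module):
    def __init__(self, dim, depth, Ltilde):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        self.leak = Ltilde / depth            # Ltilde = 0 recovers a plain residual step
    def forward(self, x):
        reps = [x]
        for layer in self.layers:
            # leaky residual step: mostly keep x, plus a small nonlinear update
            x = (1 - self.leak) * x + (1 / len(self.layers)) * torch.relu(layer(x))
            reps.append(x)
        return x, reps                        # reps discretize the "representation geodesic" A_p

net = LeakyResNet(dim=32, depth=64, Ltilde=4.0)
out, reps = net(torch.randn(16, 32))
# Inspecting the effective rank of reps along p ~ l/L illustrates the bottleneck picture.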


Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes

arXiv.org Machine Learning

The training dynamics of linear networks are well studied in two distinct setups: the lazy regime and the balanced/active regime, depending on the initialization and width of the network. We provide a surprisingly simple unifying formula for the evolution of the learned matrix that contains as special cases both the lazy and balanced regimes, but also a mixed regime in between the two. In the mixed regime, part of the network is lazy while the rest is balanced. More precisely, the network is lazy along singular values that are below a certain threshold and balanced along those that are above the same threshold. At initialization, all singular values are lazy, allowing the network to align itself with the task, so that later in time, when some of the singular values cross the threshold and become active, they converge rapidly (convergence in the balanced regime is notoriously difficult in the absence of alignment). The mixed regime is the 'best of both worlds': it converges from any random initialization (in contrast to balanced dynamics, which require special initialization) and has a low-rank bias (absent in the lazy dynamics). This allows us to prove an almost complete phase diagram of training behavior as a function of the variance at initialization and the width, for an MSE training task.
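The singular-value picture above can be visualized directly in a small experiment: train a two-layer linear network on an MSE task with a low-rank teacher and watch which singular values of the end-to-end matrix cross the threshold and activate. Widths, initialization scale, learning rate, and teacher rank below are illustrative assumptions.

# Illustrative sketch: track the singular values of the end-to-end matrix W2 @ W1
# of a two-layer linear network during gradient descent on an MSE task.
import torch

d, width, n, scale = 20, 100, 200, 0.1
X = torch.randn(n, d)
A_true = torch.randn(d, 3) @ torch.randn(3, d)       # rank-3 teacher matrix
Y = X @ A_true.T
W1 = (scale * torch.randn(width, d)).requires_grad_()
W2 = (scale * torch.randn(d, width)).requires_grad_()

for step in range(2001):
    loss = ((X @ (W2 @ W1).T - Y) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        for W in (W1, W2):
            W -= 0.05 * W.grad
            W.grad = None
    if step % 500 == 0:
        # watch which singular values "activate" over training
        print(step, torch.linalg.svdvals((W2 @ W1).detach())[:5])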


Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning

arXiv.org Machine Learning

We describe the emergence of a Convolution Bottleneck (CBN) structure in CNNs, where the network uses its first few layers to transform the input representation into a representation that is supported only along a few frequencies and channels, before using the last few layers to map back to the outputs. We define the CBN rank, which describes the number and type of frequencies that are kept inside the bottleneck, and partially prove that the parameter norm required to represent a function $f$ scales as depth times the CBN rank of $f$. We also show that the parameter norm depends at next order on the regularity of $f$. We show that any network with almost optimal parameter norm will exhibit a CBN structure in both the weights and, under the assumption that the network is stable under large learning rates, the activations, which motivates the common practice of down-sampling; and we verify that the CBN results still hold with down-sampling. Finally, we use the CBN structure to interpret the functions learned by CNNs on a number of tasks.
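One simple empirical probe of the frequency picture above is to Fourier-transform the intermediate activations of a (trained) CNN and look at how the energy concentrates on a few spatial frequencies; the toy architecture and random input below are illustrative assumptions, meant only to show the measurement, not to reproduce the paper's experiments.

# Sketch: 2-D Fourier transform of intermediate activations as a probe of which
# spatial frequencies a CNN's hidden representation actually carries.
import torch, torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 8, 3, padding=1),
)
x = torch.randn(4, 3, 32, 32)
acts = x
for layer in cnn:
    acts = layer(acts)

spectrum = torch.fft.fft2(acts).abs().mean(dim=(0, 1))    # energy per spatial frequency
print(spectrum.shape, spectrum.flatten().topk(5).values)  # a few dominant frequencies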


Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff

arXiv.org Machine Learning

Previous work has shown that DNNs with large depth $L$ and $L_{2}$-regularization are biased towards learning low-dimensional representations of the inputs, which can be interpreted as minimizing a notion of rank $R^{(0)}(f)$ of the learned function $f$, conjectured to be the Bottleneck rank. We compute finite-depth corrections to this result, revealing a measure $R^{(1)}$ of regularity which bounds the pseudo-determinant of the Jacobian $\left|Jf(x)\right|_{+}$ and is subadditive under composition and addition. This formalizes a balance between learning low-dimensional representations and minimizing complexity/irregularity in the feature maps, allowing the network to learn the 'right' inner dimension. Finally, we prove the conjectured bottleneck structure in the learned features as $L\to\infty$: for large depths, almost all hidden representations are approximately $R^{(0)}(f)$-dimensional, and almost all weight matrices $W_{\ell}$ have $R^{(0)}(f)$ singular values close to 1 while the others are $O(L^{-\frac{1}{2}})$. Interestingly, the use of large learning rates is required to guarantee an order-$O(L)$ NTK, which in turn guarantees infinite-depth convergence of the representations of almost all layers.
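The bottleneck structure can be probed numerically by tracking an effective rank of the hidden representations across layers; after training a deep network with $L_{2}$-regularization one would expect most intermediate layers to sit near $R^{(0)}(f)$ dimensions. The architecture, data, and singular-value threshold in the sketch below are illustrative assumptions.

# Sketch: a simple effective-rank measurement of each hidden representation of a deep MLP,
# the kind of statistic one would track to observe the bottleneck structure after training.
import torch, torch.nn as nn

depth, width, d = 12, 64, 10
layers = [nn.Sequential(nn.Linear(d if i == 0 else width, width), nn.ReLU()) for i in range(depth)]
net = nn.Sequential(*layers, nn.Linear(width, 1))
X = torch.randn(512, d)

def effective_rank(H, tol=1e-2):
    s = torch.linalg.svdvals(H - H.mean(0))
    return int((s > tol * s[0]).sum())          # count singular values above a relative threshold

h = X
for i, block in enumerate(net[:-1]):
    h = block(h)
    print(f"layer {i:2d}: effective rank {effective_rank(h.detach())}")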


Implicit bias of SGD in $L_{2}$-regularized linear DNNs: One-way jumps from high to low rank

arXiv.org Machine Learning

The $L_{2}$-regularized loss of Deep Linear Networks (DLNs) with more than one hidden layer has multiple local minima, corresponding to matrices with different ranks. In tasks such as matrix completion, the goal is to converge to the local minimum with the smallest rank that still fits the training data. While rank-underestimating minima can be avoided, since they do not fit the data, GD might get stuck at rank-overestimating minima. We show that with SGD there is always a probability to jump from a higher-rank minimum to a lower-rank one, but the probability of jumping back is zero. More precisely, we define a sequence of sets $B_{1}\subset B_{2}\subset\cdots\subset B_{R}$ so that $B_{r}$ contains all minima of rank $r$ or less (and not more) that are absorbing for small enough ridge parameters $\lambda$ and learning rates $\eta$: SGD has probability 0 of leaving $B_{r}$, and from any starting point there is a non-zero probability for SGD to enter $B_{r}$.
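A minimal version of this setting is easy to simulate: a deep linear network trained with mini-batch SGD and an explicit ridge penalty, with the numerical rank of the end-to-end matrix tracked over training. Depth, widths, $\lambda$, $\eta$, the teacher rank, and the rank threshold below are illustrative assumptions.

# Sketch: deep linear network with mini-batch SGD and a ridge (weight-decay) penalty,
# tracking the numerical rank of the end-to-end matrix during training.
import torch, torch.nn as nn

d, depth, lam, eta = 15, 3, 1e-3, 0.05
X = torch.randn(400, d)
Y = X @ (torch.randn(d, 2) @ torch.randn(2, d)).T           # rank-2 teacher
Ws = [nn.Parameter(torch.randn(d, d) / d**0.5) for _ in range(depth)]
opt = torch.optim.SGD(Ws, lr=eta, weight_decay=lam)         # weight decay acts as the ridge term

def end_to_end(Ws):
    M = Ws[0]
    for W in Ws[1:]:
        M = W @ M
    return M

for step in range(5000):
    idx = torch.randint(len(X), (32,))
    loss = ((X[idx] @ end_to_end(Ws).T - Y[idx]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 1000 == 0:
        s = torch.linalg.svdvals(end_to_end(Ws).detach())
        print(step, int((s > 1e-2 * s[0]).sum()))           # numerical rank over training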


Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions

arXiv.org Artificial Intelligence

We then inquire under which conditions the global minima of the loss recover the 'true' rank of the data: we show that for too large depths the global minimum will be approximately rank 1 (underestimating the rank); we then argue that there is a range of depths, growing with the number of datapoints, where the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries and show that autoencoders with optimal nonlinear rank are naturally denoising. There has been a lot of recent interest in the so-called implicit bias of DNNs, which describes what functions are favored by a network when fitting the training data. Different network architectures (choice of nonlinearity, depth, width of the network, and more) and training procedures (initialization, optimization algorithm, loss) can lead to widely different biases. In contrast to the so-called kernel regime, where the implicit bias is described by the Neural Tangent Kernel (Jacot et al., 2018), there are several active regimes (also called rich or feature-learning regimes) whose implicit bias often features a form of sparsity that is absent from the kernel regime.
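One crude but easy-to-compute proxy for the rank of a nonlinear function is the numerical rank of its Jacobian at sample points; this is related to, but not the same as, the bottleneck-rank notion studied above. The network, inputs, and threshold in the sketch are illustrative assumptions.

# Sketch: numerical rank of the Jacobian of a learned function at a few sample points,
# a simple proxy for a notion of "rank" of a nonlinear map.
import torch, torch.nn as nn

net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 6))
x = torch.randn(5, 6)

for xi in x:
    J = torch.autograd.functional.jacobian(net, xi.unsqueeze(0)).squeeze()  # (6, 6) Jacobian
    s = torch.linalg.svdvals(J)
    print(int((s > 1e-2 * s[0]).sum()))        # numerical Jacobian rank at this point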


Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis

arXiv.org Machine Learning

Spectral analysis is a powerful tool, decomposing any function into simpler parts. In machine learning, Mercer's theorem generalizes this idea, providing for any kernel and input distribution a natural basis of functions of increasing frequency. More recently, several works have extended this analysis to deep neural networks through the framework of the Neural Tangent Kernel. In this work, we analyze the layer-wise spectral bias of Deep Neural Networks and relate it to the contributions of different layers in the reduction of generalization error for a given target function. We utilize the properties of Hermite polynomials and Spherical Harmonics to prove that initial layers exhibit a larger bias towards high-frequency functions defined on the unit sphere. We further provide empirical results validating our theory on high-dimensional datasets for Deep Neural Networks.
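The empirical NTK offers one way to look at layer-wise contributions: it decomposes as a sum over parameter tensors of Gram matrices of per-example gradients, and each such layer-wise kernel can be eigendecomposed separately on inputs drawn from the unit sphere. The architecture and sizes below are illustrative assumptions, not the paper's experimental setup.

# Sketch: per-parameter-tensor contributions to the empirical NTK of a small MLP,
# each of which can be eigendecomposed to inspect its own spectral bias.
import torch, torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
X = torch.nn.functional.normalize(torch.randn(64, 8), dim=1)   # inputs on the unit sphere

params = list(net.parameters())
grads = []                                    # grads[i][l]: gradient of f(x_i) w.r.t. parameter tensor l
for xi in X:
    g = torch.autograd.grad(net(xi.unsqueeze(0)).squeeze(), params)
    grads.append([gl.flatten() for gl in g])

for l in range(len(params)):
    G = torch.stack([g[l] for g in grads])    # (n, #entries in this parameter tensor)
    K_l = G @ G.T                             # layer-wise contribution to the empirical NTK
    eig = torch.linalg.eigvalsh(K_l)
    print(f"param tensor {l}: top NTK eigenvalues {eig[-3:].tolist()}")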