Collaborating Authors: Karkada, Dhruva


Solvable Dynamics of Self-Supervised Word Embeddings and the Emergence of Analogical Reasoning

arXiv.org Machine Learning

The remarkable success of large language models relies on their ability to implicitly learn structured latent representations from the pretraining corpus. As a simpler surrogate for representation learning in language modeling, we study a class of solvable contrastive self-supervised algorithms which we term quadratic word embedding models. These models resemble the word2vec algorithm and perform similarly on downstream tasks. Our main contributions are analytical solutions for both the training dynamics (under certain hyperparameter choices) and the final word embeddings, given in terms of only the corpus statistics. Our solutions reveal that these models learn orthogonal linear subspaces one at a time, each one incrementing the effective rank of the embeddings until model capacity is saturated. Training on WikiText, we find that the top subspaces represent interpretable concepts. Finally, we use our dynamical theory to predict how and when models acquire the ability to complete analogies.
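
As a rough, hedged illustration of the pipeline the abstract describes (corpus statistics, a low-rank factorization, analogy completion by vector arithmetic), the sketch below factorizes a smoothed PMI-style matrix built from a toy corpus at increasing rank. The PMI-style target, the toy corpus, and the rank sweep are illustrative assumptions, not the paper's solvable model, and the corpus is far too small for analogies to emerge reliably; the point is only to make the rank-by-rank picture concrete.

```python
import numpy as np

# Toy corpus; far too small for analogies to emerge reliably, but it makes
# every step of the pipeline explicit.
corpus = [
    "king rules the kingdom", "queen rules the kingdom",
    "man walks the street", "woman walks the street",
    "king is a man", "queen is a woman",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for s in tokens for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a window of 2.
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1

# Smoothed PMI-style target built from corpus statistics (an assumption;
# the paper derives its own corpus-statistic matrix).
M = np.log((C + 1) * C.sum()
           / (C.sum(axis=1, keepdims=True) @ C.sum(axis=0, keepdims=True) + 1))

# Keeping the top-k singular subspaces mimics embeddings whose effective
# rank grows one orthogonal subspace at a time during training.
U, S, _ = np.linalg.svd(M)
for k in (1, 2, 4):
    W = U[:, :k] * np.sqrt(S[:k])                      # embeddings at rank k
    v = lambda w: W[idx[w]] / (np.linalg.norm(W[idx[w]]) + 1e-9)
    query = v("king") - v("man") + v("woman")          # analogy by vector arithmetic
    scores = W @ query / (np.linalg.norm(W, axis=1) + 1e-9)
    for w in ("king", "man", "woman"):                 # exclude the query words
        scores[idx[w]] = -np.inf
    print(f"rank {k}: king - man + woman -> {vocab[int(np.argmax(scores))]}")
```

On a real corpus (e.g. WikiText) the same rank sweep is what lets one ask when, as the effective rank grows, analogy completion first succeeds.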


The lazy (NTK) and rich ($\mu$P) regimes: a gentle tutorial

arXiv.org Machine Learning

A central theme of the modern machine learning paradigm is that larger neural networks achieve better performance on a variety of metrics. Theoretical analyses of these overparameterized models have recently centered around studying very wide neural networks. In this tutorial, we provide a nonrigorous but illustrative derivation of the following fact: in order to train wide networks effectively, there is only one degree of freedom in choosing hyperparameters such as the learning rate and the size of the initial weights. This degree of freedom controls the richness of training behavior: at minimum, the wide network trains lazily like a kernel machine, and at maximum, it exhibits feature learning in the so-called $\mu$P regime. In this paper, we explain this richness scale, synthesize recent research results into a coherent whole, offer new perspectives and intuitions, and provide empirical evidence supporting our claims. In doing so, we hope to encourage further study of the richness scale, as it may be key to developing a scientific theory of feature learning in practical deep neural networks.
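
The single degree of freedom described above can be made concrete with a toy experiment. The sketch below is an assumption-laden illustration in the spirit of lazy-training analyses (output scaled by a factor alpha, learning rate scaled by 1/alpha^2, initial output subtracted), not the tutorial's exact parameterization: with large alpha the hidden weights barely move and the network behaves like a kernel machine, while with small alpha the hidden weights move substantially, i.e. features are learned.

```python
import numpy as np

n, d, width, steps, lr = 64, 3, 512, 2000, 0.05
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]        # simple nonlinear target

def train(alpha):
    rng = np.random.default_rng(1)                   # same init for every alpha
    W = rng.normal(size=(d, width)) / np.sqrt(d)     # hidden-layer weights
    a = rng.normal(size=width)                       # readout weights
    W0 = W.copy()
    # Subtract the initial output so every run starts at f = 0 (a common
    # convention in lazy-training setups; an assumption here).
    f_init = np.maximum(X @ W, 0.0) @ a * alpha / np.sqrt(width)
    for _ in range(steps):
        H = np.maximum(X @ W, 0.0)                   # ReLU features
        f = alpha / np.sqrt(width) * (H @ a) - f_init
        r = f - y                                    # residual
        grad_a = alpha / np.sqrt(width) * H.T @ r / n
        grad_W = alpha / np.sqrt(width) * X.T @ ((r[:, None] * a) * (H > 0)) / n
        a -= (lr / alpha**2) * grad_a                # lr scaled by 1/alpha^2
        W -= (lr / alpha**2) * grad_W
    H = np.maximum(X @ W, 0.0)
    f = alpha / np.sqrt(width) * (H @ a) - f_init
    move = np.linalg.norm(W - W0) / np.linalg.norm(W0)
    return move, np.mean((f - y) ** 2)

for alpha in (30.0, 1.0, 0.3):
    move, mse = train(alpha)
    print(f"alpha={alpha:>5}: relative hidden-weight movement {move:.4f}, train mse {mse:.4f}")
```

The printed feature movement shrinks roughly like 1/alpha: the large-alpha runs are effectively kernel regression with frozen features, while the small-alpha runs learn their features.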


More is Better in Modern Machine Learning: when Infinite Overparameterization is Optimal and Overfitting is Obligatory

arXiv.org Machine Learning

In our era of enormous neural networks, empirical progress has been driven by the philosophy that more is better. Recent deep learning practice has found repeatedly that larger model size, more data, and more computation (resulting in lower training loss) improve performance. In this paper, we give theoretical backing to these empirical observations by showing that these three properties hold in random feature (RF) regression, a class of models equivalent to shallow networks with only the last layer trained. Concretely, we first show that the test risk of RF regression decreases monotonically with both the number of features and the number of samples, provided the ridge penalty is tuned optimally. In particular, this implies that infinite-width RF architectures are preferable to those of any finite width. We then proceed to demonstrate that, for a large class of tasks characterized by power-law eigenstructure, training to near-zero training loss is obligatory: near-optimal performance can only be achieved when the training error is much smaller than the test error. Grounding our theory in real-world data, we find empirically that standard computer vision tasks with convolutional neural tangent kernels clearly fall into this class. Taken together, our results tell a simple, testable story of the benefits of overparameterization, overfitting, and more data in random feature models.

It is an empirical fact that more is better in modern machine learning. State-of-the-art models are commonly trained with as many parameters and for as many iterations as compute budgets allow, often with little regularization. This ethos of enormous, underregularized models contrasts sharply with the received wisdom of classical statistics, which suggests small, parsimonious models and strong regularization to make training and test losses similar. The development of new theoretical results consistent with the success of overparameterized, underregularized modern machine learning has been a central goal of the field for some years. How might such theoretical results look? Consider the well-tested observation that wider networks virtually always achieve better performance, so long as they are properly tuned (Kaplan et al., 2020; Hoffmann et al., 2022; Yang et al., 2022). A theoretical guarantee that wider is indeed better would do much to bring deep learning theory up to date with practice. In this work, we take a first step towards this general result by proving it in the special case of RF regression -- that is, for shallow networks with only the second layer trained. Our Theorem 1 states that, for RF regression, more features (as well as more data) are better, and thus infinite width is best.
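
To make the first claim above concrete, here is a small numerical sketch of random feature ridge regression in which the tuned test error improves as the number of features grows. The synthetic tanh target, Gaussian inputs, ReLU random features, and ridge tuning by grid search against held-out error are illustrative assumptions rather than the paper's setting; a single random draw can fluctuate slightly, but the overall trend matches the monotonicity result described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 10, 200, 2000
w_star = rng.normal(size=d)

def target(X):
    # A smooth nonlinear target (illustrative choice).
    return np.tanh(X @ w_star / np.sqrt(d))

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = target(X_train) + 0.1 * rng.normal(size=n_train)   # noisy labels
y_test = target(X_test)

def tuned_rf_test_mse(num_features, ridges=np.logspace(-6, 2, 20)):
    # Random ReLU features; only the readout is "trained", via ridge regression.
    V = rng.normal(size=(d, num_features)) / np.sqrt(d)
    Phi_tr = np.maximum(X_train @ V, 0.0)
    Phi_te = np.maximum(X_test @ V, 0.0)
    K = Phi_tr @ Phi_tr.T                      # n_train x n_train Gram matrix
    best = np.inf
    for lam in ridges:                         # optimally tuned ridge, as in the theorem
        coef = np.linalg.solve(K + lam * np.eye(n_train), y_train)
        preds = Phi_te @ (Phi_tr.T @ coef)     # dual form of ridge regression
        best = min(best, np.mean((preds - y_test) ** 2))
    return best

for k in (5, 20, 100, 500, 2000):
    print(f"{k:>5} random features: tuned test mse = {tuned_rf_test_mse(k):.4f}")
```

The dual (kernel) form of the ridge solution is used so that the linear solve stays n_train x n_train even when the number of random features greatly exceeds the number of samples.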