Omnigrok: Grokking Beyond Algorithmic Data
Ziming Liu, Eric J. Michaud, Max Tegmark
–arXiv.org Artificial Intelligence
Grokking, the unusual phenomenon on algorithmic datasets where generalization happens long after the training data has been overfit, has remained elusive. We aim to understand grokking by analyzing the loss landscapes of neural networks, identifying the mismatch between the training and test loss landscapes as the cause of grokking. We refer to this as the "LU mechanism" because the training and test losses, plotted against model weight norm, typically resemble an "L" and a "U", respectively. This simple mechanism can explain many aspects of grokking: dependence on data size, dependence on weight decay, the emergence of representations, etc. Guided by this intuitive picture, we are able to induce grokking on tasks involving images, language and molecules. In the reverse direction, we are able to eliminate grokking for algorithmic datasets. We attribute the dramatic nature of grokking on algorithmic datasets to representation learning.

Generalization lies at the heart of machine learning. A good machine learning model should arguably generalize quickly and behave in a smooth, predictable way under changes of (hyper)parameters. Grokking, the phenomenon where a model generalizes long after overfitting the training set, has raised interesting questions since it was observed on algorithmic datasets by Power et al. (2022): Q1 The origin of grokking: why is generalization so long delayed after overfitting?
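The "LU" picture is straightforward to probe empirically. Below is a minimal PyTorch sketch, not the authors' code: `model`, `loss_fn`, and the data batches are placeholder assumptions. It rescales a trained network's weights by a factor `alpha` and records train/test loss against the resulting weight norm. Under the LU mechanism, the training curve should flatten out once the norm is large enough (the "L"), while only a narrow band of norms keeps the test loss low (the "U").

```python
# Hypothetical probe of the "LU" picture: evaluate train/test loss as a
# function of overall weight norm by uniformly rescaling a trained model.
import copy
import torch


def loss_vs_weight_norm(model, loss_fn, train_batch, test_batch,
                        scales=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Return (weight_norm, train_loss, test_loss) triples, one per scale."""
    results = []
    for alpha in scales:
        # Copy the model and multiply every parameter by alpha.
        scaled = copy.deepcopy(model)
        with torch.no_grad():
            for p in scaled.parameters():
                p.mul_(alpha)
            # Overall L2 norm of the rescaled weights.
            norm = torch.sqrt(sum((p ** 2).sum() for p in scaled.parameters()))
            x_tr, y_tr = train_batch
            x_te, y_te = test_batch
            train_loss = loss_fn(scaled(x_tr), y_tr).item()
            test_loss = loss_fn(scaled(x_te), y_te).item()
        results.append((norm.item(), train_loss, test_loss))
    return results
```

The same picture suggests why grokking can be induced or eliminated: initializing with a large weight norm strands the model on the flat part of the "L", where it overfits, and weight decay then slowly pulls the norm back into the generalizing bottom of the "U".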
Mar-23-2023