Learning curves theory for hierarchically compositional data with power-law distributed features
Francesco Cagnetta, Hyunmo Kang, Matthieu Wyart
Recent theories suggest that Neural Scaling Laws arise whenever the task can be linearly decomposed into power-law distributed units. Alternatively, scaling laws also emerge when data exhibit a hierarchically compositional structure, as is thought to occur in language and images. To unify these views, we consider classification and next-token prediction tasks based on probabilistic context-free grammars -- probabilistic models that generate data via a hierarchy of production rules. For classification, we show that power-law distributed production rules yield a power-law learning curve whose exponent depends on the rules' distribution, with a large multiplicative constant set by the hierarchical structure. By contrast, for next-token prediction, the distribution of production rules controls the local details of the learning curve, but not the exponent describing its large-scale behaviour.
May 13, 2025
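To make the setup concrete, below is a minimal Python sketch of the data-generating process the abstract describes: a depth-L probabilistic context-free grammar in which each symbol expands via production rules drawn with power-law (Zipf) probabilities. All parameter names and values (`v`, `m`, `s`, `L`, `alpha`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's values).
v = 8        # number of symbols at every level
m = 4        # production rules available to each nonterminal symbol
s = 2        # branching factor: each symbol expands into s lower-level symbols
L = 3        # depth of the hierarchy
alpha = 1.5  # Zipf exponent of the rule distribution

# Power-law probabilities over the m rules of each symbol: p_k ∝ k^(-alpha).
p = np.arange(1, m + 1, dtype=float) ** -alpha
p /= p.sum()

# For each level and each symbol, fix m random productions,
# each mapping the symbol to a string of s lower-level symbols.
rules = {
    (level, sym): rng.integers(v, size=(m, s))
    for level in range(L)
    for sym in range(v)
}

def expand(sym: int, level: int) -> list[int]:
    """Recursively expand a symbol down to the leaves (level L)."""
    if level == L:
        return [sym]
    k = rng.choice(m, p=p)          # pick a rule with power-law probability
    children = rules[(level, sym)][k]
    return [leaf for c in children for leaf in expand(c, level + 1)]

# Sample one string of s**L leaf symbols from root symbol 0.
print(expand(0, 0))
```

In models of this kind, classification typically means recovering the root symbol from the string of leaves, while next-token prediction asks for the last leaf given the preceding ones; the two tasks the abstract contrasts can thus be read off the same generative process.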