A Latent Variable Framework for Scaling Laws in Large Language Models
Peiyao Cai, Chengyu Cui, Felipe Maia Polo, Seamus Somerstep, Leshem Choshen, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun, Kean Ming Tan, Gongjun Xu
arXiv.org Artificial Intelligence
We propose a statistical framework built on latent variable modeling for scaling laws of large language models (LLMs). Our work is motivated by the rapid emergence of numerous new LLM families with distinct architectures and training strategies, evaluated on an increasing number of benchmarks. This heterogeneity makes a single global scaling curve inadequate for capturing how performance varies across families and benchmarks. To address this, we propose a latent variable modeling framework in which each LLM family is associated with a latent variable that captures the common underlying features in that family. An LLM's performance on different benchmarks is then driven by its latent skills, which are jointly determined by the latent variable and the model's own observable features. We develop an estimation procedure for this latent variable model and establish its statistical properties. We also design efficient numerical algorithms that support estimation and various downstream tasks. Empirically, we evaluate the approach on 12 widely used benchmarks from the Open LLM Leaderboard (v1/v2).
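To make the model structure concrete, here is a minimal generative sketch of the kind of latent variable model the abstract describes: each LLM family shares a latent variable, a model's skills are jointly determined by that latent variable and its observable features, and benchmark performance is driven by the skills. All dimensions, loadings, and function names below are hypothetical illustrations, not the paper's actual specification or estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

n_models, n_benchmarks = 6, 12   # 12 benchmarks, as on the Open LLM Leaderboard
d_latent, d_skill, d_feat = 2, 3, 2

# Hypothetical loading matrices: the family-level latent variable and each
# model's observable features (e.g. log parameter count, log training tokens)
# jointly determine its latent skills; skills in turn drive benchmark scores.
A = rng.normal(size=(d_skill, d_latent))      # latent variable -> skills
B = rng.normal(size=(d_skill, d_feat))        # observable features -> skills
W = rng.normal(size=(n_benchmarks, d_skill))  # skills -> benchmark logits


def simulate_family(family_latent, features):
    """Simulate benchmark accuracies for the models in one LLM family."""
    skills = family_latent @ A.T + features @ B.T  # (n_models, d_skill)
    logits = skills @ W.T                          # (n_models, n_benchmarks)
    return 1.0 / (1.0 + np.exp(-logits))           # accuracies in (0, 1)


# One family: a single shared latent variable, per-model observable features.
z = rng.normal(size=(1, d_latent))       # broadcast across the family's models
x = rng.normal(size=(n_models, d_feat))  # each model's own features
acc = simulate_family(z, x)
print(acc.shape)  # (6, 12): 6 models x 12 benchmarks
```

Under this sketch, heterogeneity across families enters only through the shared latent variable, while within-family performance differences come from observable features such as model size, matching the abstract's account of why a single global scaling curve is inadequate.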
Dec-9-2025