Optimizing Pretraining Data Mixtures with LLM-Estimated Utility

William Held*, Bhargavi Paranjape, Punit Singh Koura, Mike Lewis, Frank Zhang, Todor Mihaylov

arXiv.org Artificial Intelligence 

Large Language Models improve with increasing amounts of high-quality training data. However, leveraging larger datasets requires balancing quality, quantity, and diversity across sources. After evaluating nine baseline methods under both compute- and data-constrained scenarios, we find that token-count heuristics outperform manual and learned mixes, indicating that simple approaches accounting for dataset size and diversity are surprisingly effective. Building on this insight, we propose two complementary approaches: UtiliMax, which extends token-based heuristics by incorporating utility estimates from reduced-scale ablations, achieving up to a 10.6x speedup over manual baselines; and Model Estimated Data Utility (MEDU), which leverages LLMs to estimate data utility from small samples, matching ablation-based performance while reducing computational requirements by 200x.† Compared to manual (Groeneveld et al., 2024, OLMo), heuristic (Chung et al., 2023, UniMax), and learned (Xie et al., 2024, DoReMi) data mixes, UtiliMax leads to more compute-efficient models that perform better on average across tasks.

* Work completed during an internship at Meta AI.
† FLOPs for Llama 70B on the 2.1 million tokens needed for MEDU, computed using the FLOP equations from Hoffmann et al. (2022).

Large Language Model (LLM) pretraining data increasingly consists of sub-corpora from many sources, covering multiple domains and varying in size (Gao et al., 2020; Du et al., 2022; TogetherAI, 2023). Unlike traditional multi-task learning scenarios, these datasets are not necessarily aligned with a specific intended use. Moreover, "intended usage" is often multi-functional, as LLMs are developed for general-purpose functionality (Eloundou et al., 2024; Qin et al., 2023). Given multiple training corpora and multiple downstream goals, how should we sample from each corpus to get the best possible model? Prior work has explored heuristic (Rae et al., 2021; Soldaini et al., 2024) and learned (Xie et al., 2024; Albalak et al., 2023) approaches to this problem. However, there has been minimal comparison between these methods using the same data and model configuration. Furthermore, it is unclear whether these approaches are robust to the effects of epoching, which is critical as frontier models become increasingly data-constrained (Villalobos et al., 2024; Longpre et al., 2024).
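To make the token-count heuristics discussed above concrete, the following is a minimal sketch of a UniMax-style allocation (Chung et al., 2023): spread a token budget as uniformly as possible across sources while capping each source at a maximum number of epochs. The function name and interface here are illustrative, not taken from the paper.

```python
def unimax_weights(token_counts, budget, max_epochs=1.0):
    """Spread `budget` tokens as uniformly as possible across sources,
    capping each source at `max_epochs` passes over its data.
    Returns normalized sampling weights."""
    caps = {s: n * max_epochs for s, n in token_counts.items()}
    alloc = {}
    remaining = sorted(caps, key=caps.get)  # smallest capacity first
    left = budget
    while remaining:
        share = left / len(remaining)
        smallest = remaining[0]
        if caps[smallest] <= share:
            # This source hits its epoch cap before reaching a uniform share.
            alloc[smallest] = caps[smallest]
            left -= caps[smallest]
            remaining.pop(0)
        else:
            # Every remaining source can absorb a full uniform share.
            for s in remaining:
                alloc[s] = share
            remaining = []
    total = sum(alloc.values())
    return {s: a / total for s, a in alloc.items()}
```

For example, with sources of 500B, 50B, and 5B tokens, a 100B-token budget, and max_epochs=2, the two smaller corpora are upweighted toward a uniform share, but the smallest is never repeated beyond its two-epoch cap.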
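The abstract describes UtiliMax only at a high level: token-based heuristics combined with utility estimates from reduced-scale ablations. Purely as a hedged sketch of that idea, and not the paper's actual formulation, one could trade per-source utility against a concentration penalty under an epoch-cap constraint; the function name, the quadratic penalty, and the `lam` parameter below are all our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def utility_weighted_mix(utility, tokens, budget, max_epochs=1.0, lam=1.0):
    """Choose sampling weights w on the simplex that favor high-utility
    sources, while a quadratic penalty (illustrative risk/diversity term)
    discourages concentrating on any single source. Each source is capped
    so that budget * w_i <= max_epochs * tokens_i."""
    utility = np.asarray(utility, dtype=float)
    tokens = np.asarray(tokens, dtype=float)
    caps = np.minimum(1.0, max_epochs * tokens / budget)  # epoch constraint

    def neg_objective(w):
        return -(w @ utility) + lam * (w @ w)

    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, c) for c in caps]
    w0 = caps / caps.sum()  # feasible start when caps sum to >= 1
    result = minimize(neg_objective, w0, method="SLSQP",
                      bounds=bounds, constraints=constraints)
    return result.x
```

With lam=0 this reduces to stacking weight on the highest-utility sources up to their epoch caps; larger lam pulls the mix back toward uniform, recovering token-count-heuristic behavior.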
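MEDU is described above only as "leveraging LLMs to estimate data utility from small samples." The sketch below shows one way such an estimator could look; the prompt wording and the `generate` callable (a hypothetical prompt-to-completion LLM interface) are ours, not the paper's.

```python
def model_estimated_utility(samples, benchmark_skills, generate):
    """Ask an LLM to rate how useful each sampled document would be as
    pretraining data for the described downstream skills, then average.
    `generate` is a hypothetical callable: prompt str -> completion str."""
    scores = []
    for doc in samples:
        prompt = (
            f"Target skills: {benchmark_skills}\n\n"
            f"Document:\n{doc[:2000]}\n\n"
            "On a scale of 1-5, how useful would this document be as "
            "pretraining data for those skills? Reply with one digit."
        )
        reply = generate(prompt)
        digits = [ch for ch in reply if ch.isdigit()]
        if digits:
            scores.append(int(digits[0]))
    return sum(scores) / len(scores) if scores else 0.0
```

Averaging such per-document scores within each corpus yields a utility estimate per source that could then be fed into a mix optimizer like the sketch above.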
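The † footnote prices MEDU via the FLOP equations of Hoffmann et al. (2022). As a rough worked example (our arithmetic, using the standard approximations of about 2ND FLOPs per forward pass and 6ND for training):

```python
N = 70e9   # Llama 70B parameters
D = 2.1e6  # tokens the LLM reads for MEDU

forward_flops = 2 * N * D  # ~2.9e17 FLOPs if only forward passes count
train_flops = 6 * N * D    # ~8.8e17 FLOPs under the full 6ND convention
```

Under either convention this is far cheaper than training reduced-scale ablation models, which is the comparison behind the reported 200x reduction in computational requirements.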