Scaling Laws for Pre-training Agents and World Models
Tim Pearce, Tabish Rashid, Dave Bignell, Raluca Georgescu, Sam Devlin, Katja Hofmann
The performance of embodied agents has been shown to improve by increasing model parameters, dataset size, and compute. This has been demonstrated in domains from robotics to video games, when generative learning objectives on offline datasets (pre-training) are used to model an agent's behavior (imitation learning) or its environment (world modeling). This paper characterizes the role of scale in these tasks more precisely. Going beyond the simple intuition that "bigger is better", we show that the same types of power laws found in language modeling also arise in world modeling and imitation learning (e.g. between loss and optimal model size). However, the coefficients of these laws are heavily influenced by the tokenizer, task & architecture; this has important implications for the optimal sizing of models and data.
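To make the abstract's central object concrete: scaling-law analyses of this kind typically fit a saturating power law of the form L(N) = a * N^(-alpha) + c, relating pre-training loss L to model size N. The sketch below is a minimal illustration of such a fit, not code from the paper; the data points, parameter names, and starting values are all hypothetical.

```python
# Minimal sketch (not from the paper): fitting a power law
# L(N) = a * N^(-alpha) + c to hypothetical (model size, loss) pairs,
# as is standard in scaling-law analyses.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    # Irreducible loss c plus a term that decays polynomially in model size n.
    return a * n ** (-alpha) + c

# Hypothetical observations standing in for real training runs.
model_sizes = np.array([1e6, 3e6, 1e7, 3e7, 1e8, 3e8])
losses = np.array([3.10, 2.85, 2.62, 2.44, 2.31, 2.22])

# p0 is a rough starting point for the nonlinear least-squares fit.
params, _ = curve_fit(power_law, model_sizes, losses, p0=[50.0, 0.3, 2.0])
a, alpha, c = params
print(f"L(N) ~= {a:.2f} * N^(-{alpha:.3f}) + {c:.2f}")

# Extrapolate to a larger model to illustrate how the fitted law is used.
print(f"Predicted loss at N=1e9: {power_law(1e9, *params):.3f}")
```

The paper's finding is that while this functional form transfers from language modeling to world modeling and imitation learning, the fitted coefficients (here a, alpha, c) shift substantially with the tokenizer, task, and architecture.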
arXiv.org Artificial Intelligence
Dec-18-2024
- Genre:
  - Research Report > New Finding (0.68)
- Industry:
  - Leisure & Entertainment > Games > Computer Games (0.34)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks
      - Deep Learning (1.00)
    - Natural Language
      - Chatbot (0.66)
      - Large Language Model (0.70)
    - Representation & Reasoning (1.00)
    - Robots (1.00)