Toward Understanding In-context vs. In-weight Learning
Bryan Chan, Xinyi Chen, András György, Dale Schuurmans
It has recently been demonstrated empirically that in-context learning emerges in transformers when certain distributional properties are present in the training data, but that this ability can also diminish upon further training. We provide a new theoretical understanding of these phenomena by identifying simplified distributional properties that give rise to the emergence and eventual disappearance of in-context learning. We do so by first analyzing a simplified model that uses a gating mechanism to choose between an in-weight and an in-context predictor. Through a combination of generalization error and regret analyses, we identify conditions under which in-context and in-weight learning emerge. These theoretical findings are then corroborated experimentally by comparing the behaviour of a full transformer on the simplified distributions to that of the stylized model, demonstrating aligned results. We then extend the study to a full large language model, showing how fine-tuning on various collections of natural language prompts can elicit similar in-context and in-weight learning behaviour.
arXiv.org Artificial Intelligence
Oct-30-2024
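The abstract's "gating mechanism to choose between an in-weight and an in-context predictor" can be illustrated with a small sketch. The snippet below is a minimal, assumed reconstruction (class names, shapes, and the sigmoid gate parameterization are illustrative choices, not the paper's exact construction): the in-weight predictor stores class logits directly in its weights, the in-context predictor attends over labelled exemplars in the prompt, and a learned scalar gate mixes the two.

```python
import torch
import torch.nn as nn


class GatedPredictor(nn.Module):
    """Stylized gated model (illustrative sketch, not the paper's exact model).

    - In-weight predictor: a lookup table mapping the query token to class logits.
    - In-context predictor: attention over the labelled exemplars in the prompt.
    - Gate: a sigmoid over the query embedding that mixes the two predictions.
    """

    def __init__(self, vocab_size: int, num_classes: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Class logits stored directly in the weights (in-weight pathway).
        self.in_weight_head = nn.Embedding(vocab_size, num_classes)
        # Scalar gate deciding how much to trust the context.
        self.gate = nn.Linear(dim, 1)
        self.num_classes = num_classes

    def forward(self, context_tokens, context_labels, query_token):
        # context_tokens: (B, N) exemplar inputs
        # context_labels: (B, N) their class labels
        # query_token:    (B,)   token whose class must be predicted
        q = self.embed(query_token)                          # (B, D)
        k = self.embed(context_tokens)                       # (B, N, D)

        # In-context pathway: attend over exemplars, copy their labels.
        attn = torch.softmax(
            torch.einsum("bd,bnd->bn", q, k) / q.shape[-1] ** 0.5, dim=-1
        )                                                    # (B, N)
        labels_onehot = nn.functional.one_hot(
            context_labels, self.num_classes
        ).float()                                            # (B, N, C)
        in_context_logits = torch.einsum("bn,bnc->bc", attn, labels_onehot)

        # In-weight pathway: logits read off the query token alone.
        in_weight_logits = self.in_weight_head(query_token)  # (B, C)

        # Gated mixture of the two predictors.
        g = torch.sigmoid(self.gate(q))                      # (B, 1)
        return g * in_context_logits + (1.0 - g) * in_weight_logits
```

Under this framing, the gate value learned during training indicates which regime dominates: a gate near 1 corresponds to in-context behaviour (answers read off the prompt), while a gate near 0 corresponds to in-weight behaviour (answers memorized in the parameters).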