Can Language Models Compose Skills In-Context?
Zidong Liu, Zhuoyan Xu, Zhenmei Shi, Yingyu Liang
arXiv.org Artificial Intelligence
Composing basic skills from simple tasks to accomplish composite tasks is crucial for modern intelligent systems. We investigate the in-context composition ability of language models to perform composite tasks that combine basic skills demonstrated in in-context examples. This is more challenging than the standard setting, where skills and their composition can be learned during training. We conduct systematic experiments on various representative open-source language models, utilizing linguistic and logical tasks designed to probe composition abilities. The results reveal that simple task examples can have a surprisingly negative impact on performance, because the models generally struggle to recognize and assemble the skills correctly, even with Chain-of-Thought examples. Theoretical analysis further shows that it is crucial to align examples with the corresponding steps in the composition. This insight inspires a method for the probing tasks, whose improved performance provides positive support for our analysis.
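To make the setting concrete, the sketch below builds a composite-task prompt from in-context examples of two basic skills. The specific skills ("last letter" and "uppercase"), example words, and prompt layout are illustrative assumptions, not the paper's actual probing tasks.

```python
# Hypothetical illustration of in-context skill composition (the skills
# and prompt format here are assumptions, not taken from the paper).

def make_example(word, fn):
    # Format one in-context demonstration of a skill applied to a word.
    return f"Input: {word}\nOutput: {fn(word)}"

last_letter = lambda w: w[-1]          # basic skill A
upper = lambda w: w.upper()            # basic skill B
composite = lambda w: upper(last_letter(w))  # A then B, composed

# Demonstrations of each basic skill in isolation.
skill_a = "\n\n".join(make_example(w, last_letter) for w in ["apple", "train"])
skill_b = "\n\n".join(make_example(w, upper) for w in ["cat", "dog"])

query = "berry"
prompt = (
    "Task A (last letter):\n" + skill_a +
    "\n\nTask B (uppercase):\n" + skill_b +
    "\n\nComposite task (last letter, then uppercase):\n"
    f"Input: {query}\nOutput:"
)

# The model is expected to compose the two demonstrated skills:
expected = composite(query)  # "Y"
```

The paper's finding is that such simple-task demonstrations can hurt rather than help: models often fail to assemble the two skills for the composite query unless the examples are aligned with the composition steps.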
Oct-28-2025