2 Preliminaries

Computational graph. Let A be a deterministic algorithm and let F_A be a set of deterministic primitive operations that can be used by A during execution. Given an input x, we define the
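The idea of a computational graph induced by running A on x can be sketched as follows. This is a minimal illustration, not the paper's formalism: the class and function names (`Node`, `make_primitive`) and the example primitives `add` and `mul` are assumptions, and each call to a wrapped primitive simply records a node whose edges point to the nodes that produced its arguments.

```python
class Node:
    def __init__(self, op, args, value):
        self.op = op          # name of the primitive operation (or "input")
        self.args = args      # predecessor Node objects (the operation's inputs)
        self.value = value    # concrete value computed at this node

def make_primitive(name, fn, graph):
    """Wrap a primitive from F_A so each call appends a Node to `graph`."""
    def wrapped(*nodes):
        value = fn(*(n.value for n in nodes))
        node = Node(name, nodes, value)
        graph.append(node)
        return node
    return wrapped

# Hypothetical example: an algorithm A built from the primitives {add, mul}.
graph = []
add = make_primitive("add", lambda a, b: a + b, graph)
mul = make_primitive("mul", lambda a, b: a * b, graph)

x = Node("input", (), 3)   # input node for x = 3
out = add(mul(x, x), x)    # A(x) = x*x + x

print(out.value)                # 12
print([n.op for n in graph])    # ['mul', 'add']
```

Executing A on a concrete input thus leaves behind a trace of primitive applications; the graph's edges record which earlier results each operation consumed.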
Neural Information Processing Systems
We analyze the capabilities of Transformer language models in learning compositional discrete tasks. To this end, we evaluate trained LLaMA models and prompted GPT-4 and Gemini on four tasks that require learning a composition of several discrete sub-tasks. In particular, we measure how well these models can reuse primitives observed in the sub-tasks to learn the composition task.