2 Preliminaries

Computational graph. Let A be a deterministic algorithm and let F_A be a set of deterministic primitive operations that can be used by A during execution. Given an input x, we define the
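One common way to make this definition concrete is to record each primitive operation applied during A's execution as a node in a directed acyclic graph. The following sketch is illustrative only and not the paper's formalization; the `Node` class, the `record` helper, and the example operations are all assumptions introduced here.

```python
# Hedged sketch: represent the computational graph of a deterministic
# algorithm A as a DAG whose nodes are values produced by primitive
# operations drawn from F_A. All names here are illustrative assumptions.

class Node:
    def __init__(self, op, parents, value):
        self.op = op            # name of the primitive operation in F_A
        self.parents = parents  # input nodes the operation consumed
        self.value = value      # concrete value computed on input x

def record(op_name, fn, *parents):
    """Apply a primitive operation and record it as a graph node."""
    value = fn(*(p.value for p in parents))
    return Node(op_name, list(parents), value)

# Example: a toy algorithm A(x) = (x + 1) * x with F_A = {add1, mul}
x = Node("input", [], 3)
a = record("add1", lambda v: v + 1, x)
out = record("mul", lambda u, v: u * v, a, x)

print(out.value)                       # 12
print([p.op for p in out.parents])     # ['add1', 'input']
```

Executing A on a fixed input x thus yields one such graph, with leaves corresponding to the input and internal nodes to applications of operations in F_A.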

Neural Information Processing Systems 

We analyze the capabilities of Transformer language models in learning compositional discrete tasks. To this end, we evaluate training LLaMA models and prompting GPT-4 and Gemini on four tasks that require learning a composition of several discrete sub-tasks. In particular, we measure how well these models can reuse primitives observed in the sub-tasks to learn the composition task.