
Neural Information Processing Systems

In Section 3, we briefly described four Common Core-inspired environments: equations, fractions, ternary-addition and sorting. This dataset contains logs of student interactions with an automated algebra tutor. With 50% chance, we generate a number. A number is generated by first picking the number of prime factors (between 0 and 4), then drawing each factor independently from the set {2, 3, 5, 7} and multiplying them. A state is solved when the final number can be readily read from the state: all digits must multiply different powers, they must be sorted by power, and there should be no zero digits.
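The number-generation procedure above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name and the use of Python's `random` module are assumptions.

```python
import random

PRIMES = [2, 3, 5, 7]

def generate_number(rng=random):
    """Sample a number as described: pick a count of prime factors
    (0 to 4), draw each factor independently from {2, 3, 5, 7},
    and multiply them (zero factors yields the empty product, 1)."""
    k = rng.randint(0, 4)          # number of prime factors, inclusive
    n = 1
    for _ in range(k):
        n *= rng.choice(PRIMES)    # each factor drawn independently
    return n
```

By construction, every generated value lies between 1 (no factors) and 7^4 = 2401 (four sevens) and is divisible only by primes in {2, 3, 5, 7}.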


Exploring State Tracking Capabilities of Large Language Models

Rezaee, Kiamehr, Camacho-Collados, Jose, Pilehvar, Mohammad Taher

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have demonstrated impressive capabilities in solving complex tasks, including those requiring a certain level of reasoning. In this paper, we focus on state tracking, a problem where models need to keep track of the state governing a number of entities. To isolate the state tracking component from other factors, we propose a benchmark based on three well-defined state tracking tasks and analyse the performance of LLMs in different scenarios. The results indicate that the recent generation of LLMs (specifically, GPT-4 and Llama3) is capable of tracking state, especially when integrated with mechanisms such as Chain of Thought. However, previous-generation models, while understanding the task and solving it at the initial stages, often fail after a certain number of steps.
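To make the notion of state tracking concrete, here is a toy task of the kind such benchmarks use: entities (boxes) whose contents are permuted by a sequence of swap operations, where the model must report the final contents. This is a hypothetical illustration, not one of the paper's three tasks; the function and the swap-based setup are assumptions.

```python
def track_state(n_entities, ops):
    """Compute the ground-truth final state of a toy swap task.

    Each box i starts holding item i; each op (i, j) exchanges the
    contents of boxes i and j. An LLM would be asked to produce this
    final state given the operation sequence in natural language."""
    state = list(range(n_entities))
    for i, j in ops:
        state[i], state[j] = state[j], state[i]
    return state

# Example: three boxes, two swaps.
# Start [0, 1, 2]; swap(0, 1) -> [1, 0, 2]; swap(1, 2) -> [1, 2, 0].
final = track_state(3, [(0, 1), (1, 2)])
```

The difficulty scales with the number of operations, which matches the abstract's observation that older models track state correctly at the initial steps but degrade as the sequence grows.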