What is an "Abstract Reasoner"? Revisiting Experiments and Arguments about Large Language Models
Tian Yun, Chen Sun, Ellie Pavlick
arXiv.org Artificial Intelligence
Recent work has argued that large language models (LLMs) are not "abstract reasoners", citing their poor zero-shot performance on a variety of challenging tasks as evidence. We revisit these experiments in order to add nuance to the claim. First, we show that while LLMs indeed perform poorly in a zero-shot setting, even tuning a small subset of parameters for input encoding can enable near-perfect performance. However, we also show that this finetuning does not necessarily transfer across datasets. We take this collection of empirical results as an invitation to (re-)open the discussion of what it means to be an "abstract reasoner", and why it matters whether LLMs fit the bill.
Jul-31-2025