MastermindEval: A Simple But Scalable Reasoning Benchmark

Jonas Golde, Patrick Haller, Fabio Barth, Alan Akbik

arXiv.org Artificial Intelligence 

Recent advancements in large language models (LLMs) have led to remarkable performance across a wide range of language understanding and mathematical tasks. As a result, increasing attention has been given to assessing the true reasoning capabilities of LLMs, driving research into commonsense, numerical, logical, and qualitative reasoning. However, with the rapid progress of reasoning-focused models such as OpenAI's o1 and DeepSeek's R1, there has been a growing demand for reasoning benchmarks that can keep pace with ongoing model developments. To this end, we introduce MastermindEval, a simple but scalable deductive reasoning benchmark based on the board game Mastermind. Our benchmark supports two evaluation paradigms: (1) agentic evaluation, in which the model autonomously plays the game, and (2) deductive reasoning evaluation, in which the model is given a pre-played game state with only one possible valid code left to infer. In our experiments, we (1) find that even easy Mastermind instances are difficult for current models and (2) demonstrate that the benchmark can scale to more advanced models in the future. Furthermore, we investigate why models fail to deduce the final solution and find that current models are limited in deducing the concealed code as the number of statements from which information must be combined increases.

Large language models (LLMs) have demonstrated remarkable performance across various generation tasks, spanning both text and vision modalities (Grattafiori et al., 2024). These models, characterized by their large parameter counts, have proven effective in a wide range of language understanding tasks (Brown et al., 2020; Zhao et al., 2024).
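To make the deductive reasoning paradigm concrete, the sketch below shows how a pre-played Mastermind state can be checked for having exactly one valid code: standard black/white-peg feedback is computed for each past guess, and the candidate space is filtered to the codes consistent with every clue. This is only an illustration under assumed parameters (the colour set, code length, example secret, and helper names `feedback` and `consistent_codes` are hypothetical), not the paper's actual benchmark construction.

```python
from itertools import product
from collections import Counter

def feedback(secret, guess):
    """Standard Mastermind feedback as (black, white) pegs:
    black = right colour in the right position,
    white = right colour in the wrong position."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def consistent_codes(colours, length, clues):
    """All codes that reproduce the observed feedback for every past guess."""
    return [
        code
        for code in product(colours, repeat=length)
        if all(feedback(code, guess) == fb for guess, fb in clues)
    ]

# Illustrative pre-played game state: clues are generated from a chosen
# hidden code so they are guaranteed to be mutually consistent.
secret = ("C", "A", "F", "B")
guesses = [("A", "B", "C", "D"), ("B", "A", "D", "E"), ("C", "D", "A", "B")]
clues = [(g, feedback(secret, g)) for g in guesses]

remaining = consistent_codes("ABCDEF", 4, clues)
# A state qualifies as a purely deductive instance only if exactly one
# code remains; otherwise more guesses would be needed to disambiguate.
print(len(remaining), remaining[:3])
```

In this reading, the agentic paradigm would let the model choose the guesses itself, while the deductive paradigm hands it a finished clue set and asks only for the single remaining code.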