Butter-Bench: Evaluating LLM Controlled Robots for Practical Intelligence

Sharrock, Callum, Petersson, Lukas, Petersson, Hanna, Backlund, Axel, Wennström, Axel, Nordström, Kristoffer, Aronsson, Elias

arXiv.org Artificial Intelligence 

We present Butter-Bench, a benchmark evaluating large language model (LLM) controlled robots for practical intelligence, defined as the ability to navigate the messiness of the physical world. Current state-of-the-art robotic systems use a hierarchical architecture with LLMs in charge of high-level reasoning, and a Vision Language Action (VLA) model for low-level control. Butter-Bench evaluates the LLM part in isolation from the VLA. Although LLMs have repeatedly surpassed humans in evaluations requiring analytical intelligence, we find humans still outperform LLMs on Butter-Bench: the best LLMs score 40%, while the mean human score is 95%. LLMs struggled the most with multi-step spatial planning and social understanding. We also evaluate LLMs that are fine-tuned for embodied reasoning and conclude that this training does not improve their score on Butter-Bench.

Language models (LMs) were initially intended for narrow text understanding tasks. The first Transformer-based LM (Vaswani et al., 2017) was explicitly trained for translation. However, large-scale training runs of LMs eventually resulted in emergent behaviour: model capabilities that were not explicitly trained for (Brown et al., 2020). For example, LLMs are not trained to be robots, yet companies such as Figure (Helix, 2025) and Google DeepMind (Gemini Robotics 1.5, 2025) use LLMs in their robotic stack.
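The hierarchical architecture described above can be sketched as a simple control loop: an LLM decomposes a task into high-level steps, and a VLA model turns each step into low-level actions. This is a minimal illustrative sketch, not the paper's implementation; all function and variable names here are invented, and real stacks (e.g. Helix, Gemini Robotics) are far more involved, including perception feedback and replanning.

```python
# Hypothetical sketch of a hierarchical LLM + VLA robot stack.
# llm_plan and vla_execute are stand-ins for model calls, invented
# for illustration only.

def llm_plan(task: str) -> list[str]:
    # Stand-in for an LLM call that decomposes a task into steps.
    return [
        "locate the butter",
        "navigate to the butter",
        "hand the butter to the person",
    ]

def vla_execute(step: str) -> bool:
    # Stand-in for a VLA policy that maps one step to motor commands
    # and reports whether it succeeded.
    return True

def run_robot(task: str) -> list[tuple[str, bool]]:
    """LLM plans at the high level; the VLA executes each step."""
    results = []
    for step in llm_plan(task):
        ok = vla_execute(step)
        results.append((step, ok))
        if not ok:
            break  # a real stack would replan here via the LLM
    return results

print(run_robot("pass the butter"))
```

Butter-Bench targets the `llm_plan` layer of this loop in isolation, holding the low-level execution fixed.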