ACPBench Hard: Unrestrained Reasoning about Action, Change, and Planning

Harsha Kokel, Michael Katz, Kavitha Srinivas, Shirin Sohrabi

arXiv.org Artificial Intelligence 

The ACPBench dataset provides atomic reasoning tasks required for efficient planning. It distills the complex plan-generation task into separate atomic reasoning tasks in their simplest possible form: boolean or multiple-choice questions, where the model must choose the right answer from the provided options. While ACPBench aims to test the simplest form of reasoning about action and change, a model tasked with planning does not typically have options to choose from; the reasoning required for planning therefore demands an open-ended, generative form of these tasks. To that end, we introduce ACPBench Hard, a generative version of ACPBench with open-ended questions that the model needs to answer. Models that perform well on these tasks could in principle be integrated into a planner or used directly as a policy. We discuss the complexity of these tasks as well as the complexity of validating the correctness of their answers, and we present validation algorithms for each task. Equipped with these validators, we test the performance of a variety of models on our tasks and find that, for most of the tasks, the performance of even the largest models is still subpar. Our experiments show that no single model dominates the others on these tasks and, with a few exceptions, all tested language models score below 65%, indicating that even the current frontier language models have a long way to go before they can reliably reason about planning. The ACPBench Hard collection is available at https://ibm.github.io/ACPBench.

Introduction

The ability to reason and plan is a cornerstone of artificial intelligence. With the introduction of large language models, a major focus in the field is on testing their abilities in these two areas, reasoning and planning. For reasoning, the majority of work focuses on mathematical reasoning (Cobbe et al. 2021) and logical inference (Saparov and He 2023).
For planning, most work has focused on the ability to produce or validate a plan (Valmeekam et al. 2023a; Stein et al. 2024). To tackle this gap, recent work introduced the ACPBench dataset (Kokel et al. 2025), a benchmark for testing reasoning abilities about action, change, and planning by separating the planning process into the atomic reasoning tasks performed by planners.
