IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis
Hanyu Li, Haoyu Liu, Tingyu Zhu, Tianyu Guo, Zeyu Zheng, Xiaotie Deng, Michael I. Jordan
arXiv.org Artificial Intelligence
Large Language Models (LLMs) show promise as data analysis agents, but existing benchmarks overlook the iterative nature of the field, where experts' decisions evolve with deeper insights into the dataset. To address this, we introduce IDA-Bench, a novel benchmark that evaluates LLM agents in multi-round interactive scenarios. Tasks are derived from complex Kaggle notebooks and presented as sequential natural language instructions issued by an LLM-simulated user. Agent performance is judged by comparing the agent's final numerical output to the human-derived baseline. Initial results show that even state-of-the-art coding agents (such as Claude-3.7-thinking) succeed on fewer than 50% of the tasks, highlighting limitations not evident in single-turn tests. This work underscores the need to improve LLMs' multi-round capabilities for building more reliable data analysis agents, and highlights the necessity of balancing instruction following with reasoning.
Jun-9-2025
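The evaluation protocol described in the abstract — a simulated user issuing instructions turn by turn, with success decided by comparing the agent's final numerical answer to a human-derived baseline — can be sketched as follows. This is a minimal illustrative sketch, not the benchmark's actual implementation; `agent_step`, `user_turns`, and the tolerance are hypothetical stand-ins.

```python
import math

def run_interactive_eval(agent_step, user_turns, baseline, rel_tol=1e-6):
    """Hypothetical sketch of an IDA-Bench-style multi-round evaluation.

    agent_step: callable taking one natural-language instruction and
        returning the agent's current numerical answer.
    user_turns: sequential instructions from the simulated user.
    baseline: the human-derived reference value for the task.
    """
    result = None
    for instruction in user_turns:
        # The agent receives one instruction per round and updates its answer.
        result = agent_step(instruction)
    # Success: the final numerical output matches the human baseline.
    return result is not None and math.isclose(result, baseline, rel_tol=rel_tol)
```

A toy agent that parses the last token of each instruction as a number would pass a task whose final instruction asks for the baseline value, and fail otherwise.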