Unreflected Use of Tabular Data Repositories Can Undermine Research Quality

Andrej Tschalzev, Lennart Purucker, Stefan Lüdtke, Frank Hutter, Christian Bartelt, Heiner Stuckenschmidt

arXiv.org, Artificial Intelligence

Data repositories have accumulated a large number of tabular datasets from various domains. Machine learning researchers actively use these datasets to evaluate novel approaches. Consequently, data repositories occupy an important position in tabular data research. They not only host datasets but also provide information on how to use them in supervised learning tasks. In this paper, we argue that, despite great achievements in usability, the unreflected use of datasets from data repositories may have led to reduced research quality and scientific rigor. We present examples from prominent recent studies that illustrate the problematic use of datasets from OpenML, a large data repository for tabular data. Our illustrations help users of data repositories avoid falling into the traps of (1) using suboptimal model selection strategies, (2) overlooking strong baselines, and (3) applying inappropriate preprocessing. In response, we discuss possible solutions for how data repositories can prevent the inappropriate use of datasets and become cornerstones for improving the overall quality of empirical research studies.

In tabular data research, the OpenML repository is used extensively (Gijsbers et al., 2019; Salinas & Erickson, 2024; Liu et al., 2024; Hollmann et al., 2025). A driving factor behind tabular data repository usage is the recent increase in efforts to transfer the success of deep learning to the tabular domain. The development of novel neural network models (Arik & Pfister, 2021; Chang et al., 2021; Gorishniy et al., 2021; 2023; 2024) and, more recently, tabular foundation models (Gardner et al., 2024; Hollmann et al., 2025) dominates the tabular machine learning community. In response, recent comparative studies aim to gather as many datasets as possible to facilitate a rigorous and comprehensive evaluation of novel approaches (Grinsztajn et al., 2022; McElfresh et al., 2023; Ye et al., 2024a). While McElfresh et al. (2023) used 196 datasets, a more recent study scales up to 300 datasets from OpenML (Ye et al., 2024a). Similarly, studies evaluating foundation models seem to include as many datasets from these benchmarks as possible, apparently taking their quality and appropriateness for granted (Yan et al., 2024; Gardner et al., 2024).

Several authors have recently criticized the intense focus on model development and the limited attention paid to data quality. Existing benchmarks often use outdated data (Kohli et al., 2024), ignore task-specific preprocessing (Tschalzev et al., 2024), or use inappropriate data splits (Rubachev et al., 2024).
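To make the critique concrete, the following is a minimal sketch, not taken from the paper, of the repository-driven workflow described above, using the openml Python package and scikit-learn. The task ID, model choice, and encoding scheme are placeholder assumptions; the point is that the repository-provided target, split, and a single generic preprocessing recipe are accepted as-is, which is exactly where the three traps can enter.

```python
# Minimal sketch of a typical repository-driven evaluation loop; not from the
# paper. Task ID, model, and preprocessing are hypothetical placeholders.
import openml
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder

task = openml.tasks.get_task(31)  # hypothetical task ID; the task defines target and splits
dataset = task.get_dataset()
X, y, categorical, _ = dataset.get_data(target=task.target_name)

# The repository-provided split is accepted as-is: the evaluation protocol is
# inherited from the task definition rather than chosen for the study.
train_idx, test_idx = task.get_train_test_split_indices(repeat=0, fold=0)

# One generic preprocessing recipe applied uniformly to every dataset,
# ignoring task-specific feature semantics.
cat_cols = [col for col, is_cat in zip(X.columns, categorical) if is_cat]
encode = ColumnTransformer(
    [("cat", OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1), cat_cols)],
    remainder="passthrough",
)
model = make_pipeline(encode, RandomForestClassifier(random_state=0))

model.fit(X.iloc[train_idx], y.iloc[train_idx])
print("accuracy:", accuracy_score(y.iloc[test_idx], model.predict(X.iloc[test_idx])))
```

Whether such defaults are appropriate depends on the dataset at hand; the paper's argument is that this step deserves scrutiny rather than being treated as settled by the repository.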