TabFlex: Scaling Tabular Learning to Millions with Linear Attention
Yuchen Zeng, Tuan Dinh, Wonjun Kang, Andreas C. Mueller
–arXiv.org Artificial Intelligence
Leveraging the in-context learning (ICL) capability of Large Language Models (LLMs) for tabular classification has gained significant attention for its training-free adaptability across diverse datasets. Recent advancements, such as TabPFN, excel on small-scale tabular datasets but struggle to scale to large and complex ones. Our work enhances the efficiency and scalability of TabPFN for larger datasets by incorporating linear attention mechanisms as a scalable alternative to quadratic-complexity self-attention. Our model, TabFlex, efficiently handles tabular datasets with thousands of features and hundreds of classes, scaling seamlessly to millions of samples. For instance, TabFlex processes the poker-hand dataset with over a million samples in just 5 seconds. Our extensive evaluations demonstrate that TabFlex can achieve over a 2x speedup compared to TabPFN and a 1.5x speedup over XGBoost, outperforming 25 tested baselines in terms of efficiency across a diverse range of datasets. Furthermore, TabFlex remains highly effective on large-scale datasets, delivering strong performance with significantly reduced computational costs, especially when combined with data-efficient techniques such as dimensionality reduction and data sampling.
Jun-9-2025
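To illustrate why linear attention scales to such long tabular contexts, here is a minimal sketch in NumPy of the general kernelized linear-attention idea: by factoring attention through a feature map `phi`, the (n x n) attention matrix is never materialized, so cost drops from O(n^2·d) to O(n·d^2). The feature map and normalization below are generic illustrative choices, not necessarily the exact formulation used in TabFlex.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard self-attention: materializes an (n x n) matrix -- quadratic in n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])                 # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                       # (n, d_v)

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized linear attention: phi(Q) (phi(K)^T V) -- linear in n.
    phi is an illustrative positive feature map; TabFlex's choice may differ."""
    Qp, Kp = phi(Q), phi(K)                                  # (n, d), (n, d)
    kv = Kp.T @ V                                            # (d, d_v); no (n, n) term
    normalizer = Qp @ Kp.sum(axis=0, keepdims=True).T        # (n, 1)
    return (Qp @ kv) / (normalizer + 1e-6)                   # (n, d_v)

# Toy usage: n can grow large without ever forming an n x n matrix.
rng = np.random.default_rng(0)
n, d = 4096, 32
Q, K, V = rng.standard_normal((3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (4096, 32)
```

The key design point is the reordering of the matrix products: computing `phi(K)^T V` first yields a small (d x d_v) summary of the context, after which each query is processed in constant time, which is what allows in-context tabular classification over millions of rows.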