Mitra: Mixed Synthetic Priors for Enhancing Tabular Foundation Models
Xiyuan Zhang, Danielle C. Maddix, Junming Yin, Nick Erickson, Abdul Fatir Ansari, Boran Han, Shuai Zhang, Leman Akoglu, Christos Faloutsos, Michael W. Mahoney, Cuixiong Hu, Huzefa Rangwala, George Karypis, Bernie Wang
arXiv.org Artificial Intelligence
Since the seminal work of TabPFN, research on tabular foundation models (TFMs) based on in-context learning (ICL) has challenged long-standing paradigms in machine learning. Without seeing any real-world data, models pretrained on purely synthetic datasets generalize remarkably well across diverse datasets, often using only a moderate number of in-context examples. This shifts the focus in tabular machine learning from model architecture design to the design of synthetic datasets, or, more precisely, to the prior distributions that generate them. Yet the guiding principles for prior design remain poorly understood. This work marks a first attempt to address this gap. We systematically investigate and identify key properties of synthetic priors that allow pretrained TFMs to generalize well. Based on these insights, we introduce Mitra, a TFM trained on a curated mixture of synthetic priors selected for their diversity, distinctiveness, and performance on real-world tabular data. Mitra consistently outperforms state-of-the-art TFMs, such as TabPFNv2 and TabICL, across both classification and regression benchmarks, with better sample efficiency.
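To make the "synthetic prior" idea concrete: TabPFN-style pretraining draws entire labeled datasets from a prior over random functions (e.g., randomly initialized networks or structural causal models) and trains the TFM on these samples. Below is a minimal, stdlib-only sketch of drawing one dataset from a random-MLP prior; every hyperparameter and design choice here is illustrative, not the actual prior used by TabPFN or Mitra.

```python
import math
import random

def sample_synthetic_dataset(n_rows=64, n_features=4, hidden=8, seed=0):
    """Draw one synthetic binary-classification dataset from a random-MLP prior.

    A fresh random network is sampled per dataset, so each draw defines a
    different unknown 'ground-truth' function. All sizes are illustrative.
    """
    rng = random.Random(seed)
    # Random weights define the latent labeling function for this dataset.
    w1 = [[rng.gauss(0, 1) for _ in range(n_features)] for _ in range(hidden)]
    w2 = [rng.gauss(0, 1) for _ in range(hidden)]
    X, y = [], []
    for _ in range(n_rows):
        x = [rng.gauss(0, 1) for _ in range(n_features)]
        # One hidden tanh layer, then a linear readout.
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
        logit = sum(w * hi for w, hi in zip(w2, h))
        X.append(x)
        y.append(1 if logit > 0 else 0)  # threshold the logit into a label
    return X, y

X, y = sample_synthetic_dataset()
print(len(X), len(X[0]), sorted(set(y)))
```

A prior "mixture" in the paper's sense would correspond to sampling each pretraining dataset from one of several such generators (varying in function class, noise, feature types, and so on); the quality of the downstream TFM then hinges on which generators are mixed, which is the design question the paper studies.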
Oct-27-2025
- Genre:
- Research Report > New Finding (0.93)
- Technology:
  - Information Technology > Artificial Intelligence > Machine Learning
    - Ensemble Learning (0.69)
    - Neural Networks (1.00)
    - Performance Analysis > Accuracy (0.92)
    - Statistical Learning (0.95)
    - Natural Language (0.93)
  - Information Technology > Data Science > Data Mining (1.00)