Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data
Kai Helli, David Schnurr, Noah Hollmann, Samuel Müller, Frank Hutter
Neural Information Processing Systems
While most ML models expect independent and identically distributed data, this assumption is often violated in real-world scenarios due to distribution shifts, degrading model performance. Until now, no tabular method has consistently outperformed classical supervised learning, which ignores these shifts. To address temporal distribution shifts, we present Drift-Resilient TabPFN, a fresh approach based on In-Context Learning with a Prior-Data Fitted Network that learns the learning algorithm itself: it accepts the entire training dataset as input and makes predictions on the test set in a single forward pass. Specifically, it learns to approximate Bayesian inference on synthetic datasets drawn from a prior that specifies the model's inductive bias. This prior is based on structural causal models (SCMs), which gradually shift over time.
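The in-context-learning interface described above can be sketched as a single function call that receives the whole labeled training set together with the test inputs and returns all test predictions at once. This is a minimal illustrative sketch: a distance-weighted vote stands in for the actual PFN transformer, and all function and variable names are hypothetical, not the paper's implementation.

```python
import numpy as np

def icl_predict(X_train, y_train, X_test):
    """One 'forward pass' in the PFN sense: the entire labeled
    training set is part of the input, and predictions for every
    test row come out together. A toy distance-weighted vote
    replaces the learned transformer here."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # Distance of this test row to every training row.
        d = np.linalg.norm(X_train - x, axis=1)
        # Closer training rows get larger weight.
        w = 1.0 / (d + 1e-9)
        # Accumulate weight per class and pick the heaviest.
        scores = [w[y_train == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Two well-separated clusters: the call needs no fit/predict split,
# mirroring the single-pass interface the abstract describes.
X_tr = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y_tr = np.array([0, 0, 1, 1])
X_te = np.array([[0.0, 0.5], [5.0, 5.5]])
print(icl_predict(X_tr, y_tr, X_te))  # → [0 1]
```

The real model amortizes this step: the transformer is pre-trained on many synthetic datasets drawn from the drifting-SCM prior, so at deployment no gradient updates are needed.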