Rethinking Test-Time Training: Tilting The Latent Distribution For Few-Shot Source-Free Adaptation
Tahir Qasim Syed, Behraj Khan
Constraints often arise in deployment settings where even lightweight parameter updates, e.g., parameter-efficient fine-tuning, could induce model shift or tuning instability. We study test-time adaptation of foundation models for few-shot classification under a completely frozen-model regime in which, additionally, no upstream data are accessible. We propose arguably the first training-free inference method that adapts predictions to the new task by performing a change of measure over the latent embedding distribution induced by the encoder. Using task-similarity scores derived from a small labeled support set, exponential tilting reweights the latent distribution in a KL-optimal manner without modifying any model parameters. Empirically, the method consistently competes with parameter-update-based methods across multiple benchmarks and shot regimes, while operating under strictly stronger constraints. These results demonstrate the viability of inference-level distributional correction for test-time adaptation even with a fully frozen model pipeline.
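To make the core idea concrete, the sketch below illustrates exponential tilting of a frozen encoder's predictions using similarity to support-set class prototypes. It is a minimal illustration under stated assumptions, not the authors' exact formulation: the temperature `eta`, the use of cosine similarity as the task-similarity score, and prototype-based class scores are all assumptions introduced here.

```python
# Minimal sketch of exponential tilting over frozen-encoder embeddings.
# Assumptions (not from the paper): cosine similarity to class prototypes
# as the task-similarity score, and a scalar tilt parameter `eta`.
import numpy as np


def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length for cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)


def class_prototypes(support_emb, support_labels, num_classes):
    """Mean embedding per class from the few-shot labeled support set."""
    return np.stack(
        [support_emb[support_labels == c].mean(axis=0) for c in range(num_classes)]
    )


def tilted_predictions(query_emb, support_emb, support_labels, num_classes, eta=10.0):
    """Reweight class probabilities by exponentially tilting similarity scores.

    q(c | z) is proportional to exp(eta * s_c(z)), where s_c(z) is the cosine
    similarity of query embedding z to the prototype of class c. The encoder
    that produced the embeddings is never updated.
    """
    protos = l2_normalize(class_prototypes(support_emb, support_labels, num_classes))
    z = l2_normalize(query_emb)
    scores = z @ protos.T                       # task-similarity scores s_c(z)
    logits = eta * scores                       # exponential tilt: exp(eta * s)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=1, keepdims=True)
```

The choice of `eta` controls how sharply the base distribution is tilted toward task-relevant regions of the embedding space; in this hypothetical setup it would be set from the support set rather than by backpropagation.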
Feb-4-2026