Frustratingly Easy Test-Time Adaptation of Vision-Language Models
Neural Information Processing Systems
Vision-Language Models (VLMs) seamlessly discriminate among arbitrary semantic categories, yet they still suffer from poor generalization when presented with challenging examples. For this reason, Episodic Test-Time Adaptation (TTA) strategies have recently emerged as powerful techniques to adapt VLMs in the presence of a single unlabeled image. The recent literature on TTA is dominated by the paradigm of prompt tuning by Marginal Entropy Minimization, which, relying on online backpropagation, inevitably slows down inference while increasing memory usage. In this work, we theoretically investigate the properties of this approach and unveil that a surprisingly strong TTA method lies dormant and hidden within it. We term this approach ZERO (TTA with "zero" temperature), whose design is both incredibly effective and frustratingly simple: augment N times, predict, retain the most confident predictions, and marginalize after setting the Softmax temperature to zero.
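The recipe in the abstract (augment, predict, keep the most confident views, marginalize at zero temperature) can be sketched in a few lines of NumPy. This is an illustrative reading of the abstract, not the paper's reference implementation; the function name `zero_tta` and the `keep_frac` parameter are assumptions. Taking the Softmax temperature to zero collapses each retained prediction to a one-hot argmax, so marginalizing reduces to a majority vote over the most confident augmented views.

```python
import numpy as np

def zero_tta(logits, keep_frac=0.1):
    """Illustrative sketch of ZERO-style test-time adaptation.

    logits: (N, C) array of class logits, one row per augmented view.
    keep_frac: fraction of the most confident views to retain (assumed knob).
    """
    # Standard softmax (numerically stabilized) to score each view's confidence.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Retain only the most confident augmented views.
    conf = probs.max(axis=1)
    k = max(1, int(round(len(logits) * keep_frac)))
    keep = np.argsort(conf)[-k:]

    # Softmax at temperature -> 0 turns each prediction into a one-hot argmax.
    onehot = np.eye(logits.shape[1])[probs[keep].argmax(axis=1)]

    # Marginalizing the one-hot predictions is a majority vote.
    marginal = onehot.mean(axis=0)
    return int(marginal.argmax())
```

Note that no backpropagation is involved: the whole procedure is a forward pass over N augmentations plus a filtered vote, which is what makes it cheap relative to prompt-tuning TTA.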
May-27-2025, 20:19:17 GMT
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (0.79)
- Natural Language (0.97)
- Vision (0.64)