NC-TTT: A Noise Contrastive Approach for Test-Time Training

David Osowiechi, Gustavo A. Vargas Hakim, Mehrdad Noori, Milad Cheraghalikhani, Ali Bahri, Moslem Yazdanpanah, Ismail Ben Ayed, Christian Desrosiers

arXiv.org Artificial Intelligence 

A crucial requirement for the success of traditional deep learning methods is that training and testing data be sampled from the same distribution. As widely shown in the literature Recht et al. [2018], Peng et al. [2018], this assumption rarely holds in practice, and a model's performance can drop dramatically in the presence of domain shifts. The field of Domain Adaptation (DA) has emerged to address this important issue, proposing various mechanisms that adapt learning algorithms to new domains. Within domain adaptation, two notable research directions have surfaced: Domain Generalization and Test-Time Adaptation. Domain Generalization (DG) approaches Volpi et al. [2018], Prakash et al. [2019], Zhou et al. [2020], Kim et al. [2022], Wang et al. [2022] typically train a model on an extensive source dataset encompassing diverse domains and augmentations, so that it achieves good performance on test examples from unseen domains without retraining. Conversely, Test-Time Adaptation (TTA) Wang et al. [2021], Khurana et al. [2021], Boudiaf et al. [2022] dynamically adjusts the model to test data in real time, typically adapting to subsets of the new domain such as mini-batches. TTA is a challenging yet practical problem, as it operates without supervision for test samples and without access to the source domain data.
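To make the TTA setting concrete, below is a minimal sketch of one common strategy, entropy minimization in the spirit of TENT (Wang et al. [2021], cited above): a frozen classifier receives an unlabeled test mini-batch and updates only a small set of parameters (here, for illustration, just a bias vector standing in for the affine normalization parameters such methods typically tune) to make its predictions more confident. This is an assumed toy setup, not the NC-TTT method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    # Shannon entropy of each row of class probabilities.
    return -(p * np.log(p + 1e-12)).sum(axis=1)

# Frozen "source" classifier: fixed weights W, adaptable bias b
# (a stand-in for the few parameters TTA methods usually update).
W = rng.normal(size=(5, 3))          # 5 features -> 3 classes
b = np.zeros(3)

# One unlabeled mini-batch from a shifted test domain.
x = rng.normal(loc=1.5, size=(32, 5))

def batch_entropy(b):
    return entropy(softmax(x @ W + b)).mean()

h_before = batch_entropy(b)

# A few gradient steps minimizing mean prediction entropy,
# using the analytic softmax-entropy gradient dH/dz_k = -p_k (log p_k + H).
lr = 0.5
for _ in range(20):
    z = x @ W + b
    p = softmax(z)
    h = entropy(p)[:, None]
    grad_z = -p * (np.log(p + 1e-12) + h)   # per-sample dH/dz
    b -= lr * grad_z.mean(axis=0)           # z = Wx + b, so dH/db = dH/dz

h_after = batch_entropy(b)
print(h_before, h_after)  # mean prediction entropy drops after adaptation
```

Note that the whole procedure uses no labels and no source data, which is precisely what makes TTA both practical and difficult: the only learning signal comes from the model's own predictions on the incoming batch.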
