Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations

Jeff Z. HaoChen, Colin Wei, Ananya Kumar, Tengyu Ma (Stanford University)

Neural Information Processing Systems (NeurIPS 2022)
Contrastive learning is a highly effective method for learning representations from unlabeled data. Recent works show that contrastive representations can transfer across domains, leading to simple state-of-the-art algorithms for unsupervised domain adaptation. In particular, a linear classifier trained to separate the representations on the source domain can also predict classes on the target domain accurately, even though the representations of the two domains are far from each other. We refer to this phenomenon as linear transferability.
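To make the linear transferability setup concrete, the sketch below trains a linear probe on source-domain representations and evaluates it on the target domain. This is an illustrative sketch, not the paper's experimental pipeline: the encoder `encode`, the data arrays, and the scikit-learn probe are hypothetical stand-ins for a pretrained contrastive encoder and real domain-shifted datasets.

```python
# Minimal sketch of measuring linear transferability, assuming a
# pretrained contrastive encoder `encode` (hypothetical) and labeled
# source- and target-domain data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_transfer_accuracy(encode, x_source, y_source, x_target, y_target):
    """Fit a linear probe on source representations; score it on the target."""
    z_source = encode(x_source)   # contrastive representations (source)
    z_target = encode(x_target)   # contrastive representations (target)
    probe = LogisticRegression(max_iter=1000).fit(z_source, y_source)
    return probe.score(z_target, y_target)  # target-domain accuracy

# Toy usage with random stand-in features; in practice, x_* are images
# and `encode` is the frozen contrastively pretrained network.
rng = np.random.default_rng(0)
encode = lambda x: x  # identity as a placeholder encoder
x_s, y_s = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
x_t, y_t = rng.normal(size=(100, 16)), rng.integers(0, 2, 100)
print(linear_transfer_accuracy(encode, x_s, y_s, x_t, y_t))
```

The key point of the phenomenon is that the probe is trained only on source representations, yet its accuracy on the target domain remains high even when the two domains' representations are far apart.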