Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
Neural Information Processing Systems
Recent work in self-supervised learning has advanced the state of the art by relying on the contrastive learning paradigm, which learns representations by pushing positive pairs (similar examples from the same class) closer together while keeping negative pairs far apart. Despite these empirical successes, theoretical foundations remain limited: prior analyses assume conditional independence of the positive pairs given the same class label, whereas recent empirical applications use heavily correlated positive pairs (i.e., data augmentations of the same image).
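To make the contrastive objective named in the title concrete, here is a minimal NumPy sketch of a batch estimator of the spectral contrastive loss: an attraction term on positive pairs (two augmentations of the same image) and a squared-similarity repulsion term on the remaining cross pairs. The function name and the within-batch negative-sampling scheme are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def spectral_contrastive_loss(z1, z2):
    """Batch estimate of the spectral contrastive loss.

    z1, z2: (n, d) arrays of embeddings of two augmentations of the
    same n images; row i of z1 and row i of z2 form a positive pair,
    and all other cross pairs serve as negatives. (Illustrative
    estimator; negative sampling details are an assumption.)
    """
    n = z1.shape[0]
    # Attraction: large inner products for positive pairs lower the loss.
    pos = np.sum(z1 * z2, axis=1).mean()
    # Repulsion: squared inner products of non-matching pairs raise it.
    sim = z1 @ z2.T                    # (n, n) cross-similarity matrix
    mask = ~np.eye(n, dtype=bool)      # drop the positive-pair diagonal
    neg = (sim[mask] ** 2).mean()
    return -2.0 * pos + neg
```

For instance, embeddings that map positives identically onto orthonormal directions (e.g. rows of the identity matrix) attain the minimum of this batch estimate, since the positive term is maximized while every negative similarity is zero.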