Intermediate Layers Matter in Momentum Contrastive Self-Supervised Learning
–Neural Information Processing Systems
We show that bringing intermediate layers' representations of two augmented versions of an image closer together in self-supervised learning helps to improve the momentum contrastive (MoCo) method. To this end, in addition to the contrastive loss, we minimize the mean squared error between the intermediate layer representations, or make their cross-correlation matrix closer to an identity matrix. Both loss objectives either outperform standard MoCo or achieve similar performance on three diverse medical imaging datasets: NIH Chest X-rays, Breast Cancer Histopathology, and Diabetic Retinopathy. The gains of the improved MoCo are especially large in a low-labeled data regime (e.g.
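The two auxiliary objectives described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `z1` and `z2` stand for batches of intermediate-layer features from two augmentations of the same images, and the weighting `lam` and epsilon are hypothetical choices. The cross-correlation objective follows the familiar Barlow Twins-style formulation (identity-matching diagonal plus decorrelating off-diagonal term).

```python
import numpy as np

def mse_loss(z1, z2):
    """Mean squared error between two batches of intermediate features."""
    return np.mean((z1 - z2) ** 2)

def cross_corr_loss(z1, z2, lam=5e-3, eps=1e-6):
    """Push the cross-correlation matrix of the two views toward identity.

    z1, z2: (batch, dim) feature matrices from the same intermediate
    layer for two augmented views (names and hyperparameters are
    illustrative, not taken from the paper).
    """
    b = z1.shape[0]
    # Normalize each feature dimension across the batch.
    z1n = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2n = (z2 - z2.mean(0)) / (z2.std(0) + eps)
    c = z1n.T @ z2n / b                       # (dim, dim) cross-correlation
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)           # diagonal -> 1
    off_diag = np.sum((c - np.diag(np.diag(c))) ** 2)   # off-diagonal -> 0
    return on_diag + lam * off_diag
```

Either loss would be added to the standard MoCo contrastive objective during pre-training, with the final-layer contrastive loss left unchanged.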