Joint Contrastive Learning with Infinite Possibilities
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling. We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL). JCL implicitly involves the simultaneous learning of an infinite number of query-key pairs, which poses tighter constraints when searching for invariant features. We derive an upper bound on this formulation that allows analytical solutions in an end-to-end training manner. While JCL is practically effective in numerous computer vision applications, we also theoretically unveil certain mechanisms that govern the behavior of JCL. We demonstrate that the proposed formulation harbors an innate agency that strongly favors similarity within each instance-specific class, and therefore remains advantageous when searching for discriminative features among distinct instances. We evaluate these proposals on multiple benchmarks, demonstrating considerable improvements over existing algorithms.
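To make the abstract's "upper bound" concrete, here is a minimal sketch of how such a bound can arise; the Gaussian model for the positive key is an illustrative assumption on our part, not necessarily the paper's exact parameterization. For a query q with a random positive key k+ and negatives k_j^-, the expected InfoNCE loss is

\[
\mathcal{L}(\mathbf{q}) = \mathbb{E}_{\mathbf{k}^{+}}\left[-\log
\frac{\exp(\mathbf{q}^{\top}\mathbf{k}^{+}/\tau)}
{\exp(\mathbf{q}^{\top}\mathbf{k}^{+}/\tau)+\sum_{j}\exp(\mathbf{q}^{\top}\mathbf{k}_{j}^{-}/\tau)}\right].
\]

Splitting the logarithm and applying Jensen's inequality \(\mathbb{E}[\log X]\le\log\mathbb{E}[X]\) to the second term gives

\[
\mathcal{L}(\mathbf{q}) \le -\frac{\mathbf{q}^{\top}\boldsymbol{\mu}}{\tau}
+\log\Big(\mathbb{E}\big[\exp(\mathbf{q}^{\top}\mathbf{k}^{+}/\tau)\big]
+\sum_{j}\exp(\mathbf{q}^{\top}\mathbf{k}_{j}^{-}/\tau)\Big),
\]

and if \(\mathbf{k}^{+}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\), the Gaussian moment-generating function yields the closed form \(\mathbb{E}[\exp(\mathbf{q}^{\top}\mathbf{k}^{+}/\tau)]=\exp(\mathbf{q}^{\top}\boldsymbol{\mu}/\tau+\mathbf{q}^{\top}\boldsymbol{\Sigma}\mathbf{q}/(2\tau^{2}))\). The bound is therefore analytic in \((\boldsymbol{\mu},\boldsymbol{\Sigma})\), which can be estimated from several augmentations of the same instance; this is one sense in which "an infinite number of query-key pairs" can be learned simultaneously.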
Author response (excerpts):
- The top-1 accuracy of JCL pre-trained features is 48.6%, which outperforms MoCo v2 (47.3%). Generalization of JCL to other data modalities (sound, language, video) will be included in our future work.
- Regarding your concerns about the writing quality and typos (e.g., Algorithm 1) ...
- The top-1 accuracy on ImageNet100 for the vanilla baseline (ResNet-50) is 80.9%, while JCL achieves 82.0%.
- The top-5 accuracy we reported for SimCLR (87.3%) was extracted from the ... Thus, there is no one-to-one correspondence between the data in Table 1 and Figure 2.
- ... MS COCO for object detection and instance segmentation tasks.
Review for NeurIPS paper: Joint Contrastive Learning with Infinite Possibilities
Additional Feedback: I think it is too strong to claim that "we also theoretically unveil the certain important mechanisms that govern the behavior of JCL." The main theoretical tool in the proposed method is an application of Jensen's inequality. There is also a section (3.3) that discusses some very basic properties of the objective. To claim any of this as a significant "theoretical contribution" is too strong in my view. To me, the most interesting aspect of Fig. 2 is part (b).
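Since the Jensen step is the crux the reviewer points at, here is a minimal PyTorch sketch of a JCL-style loss built on the bound derived above. The function name jcl_loss, the tensor shapes, and the diagonal-covariance simplification are our illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn.functional as F

def jcl_loss(q, pos_keys, neg_keys, tau=0.2):
    """Jensen upper bound on E[InfoNCE] under a diagonal-Gaussian positive key.

    q:        (B, D) query features
    pos_keys: (B, M, D) M augmented positive keys per query
    neg_keys: (N, D) negative key features
    """
    q = F.normalize(q, dim=-1)
    pos_keys = F.normalize(pos_keys, dim=-1)
    neg_keys = F.normalize(neg_keys, dim=-1)

    mu = pos_keys.mean(dim=1)                  # (B, D) empirical mean of k+
    var = pos_keys.var(dim=1, unbiased=False)  # (B, D) diagonal covariance

    # Gaussian MGF: E[exp(q.k+/tau)] = exp(q.mu/tau + q^T Sigma q / (2 tau^2))
    pos_logit = (q * mu).sum(-1) / tau + (q.pow(2) * var).sum(-1) / (2 * tau ** 2)
    neg_logits = q @ neg_keys.t() / tau        # (B, N)

    # Bound: -q.mu/tau + log(E[exp(q.k+/tau)] + sum_j exp(q.k_j^-/tau))
    logits = torch.cat([pos_logit.unsqueeze(1), neg_logits], dim=1)
    return (-(q * mu).sum(-1) / tau + torch.logsumexp(logits, dim=1)).mean()

# Smoke test with random features: 8 queries, 4 positives each, 256 negatives.
loss = jcl_loss(torch.randn(8, 128), torch.randn(8, 4, 128), torch.randn(256, 128))

Note that the q^T Sigma q term penalizes variance of the positive keys along the query direction, pulling features of the same instance together; this is consistent with the abstract's claim that the objective "strongly favors similarity within each instance-specific class".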
Multi-level Asymmetric Contrastive Learning for Medical Image Segmentation Pre-training
Shuang Zeng, Lei Zhu, Xinliang Zhang, Zifeng Tian, Qian Chen, Lujia Jin, Jiayi Wang, Yanye Lu
Contrastive learning, a powerful technique for learning image-level representations from unlabeled data, offers a promising direction for dealing with the dilemma between large-scale pre-training and limited labeled data. However, most existing contrastive learning strategies are designed mainly for downstream tasks on natural images; they are therefore sub-optimal, and even worse than learning from scratch, when applied directly to medical images, whose downstream task is usually segmentation. In this work, we propose a novel asymmetric contrastive learning framework named JCL for medical image segmentation with self-supervised pre-training. Specifically, (1) a novel asymmetric contrastive learning strategy pre-trains both the encoder and the decoder simultaneously in one stage, providing better initialization for segmentation models; (2) a multi-level contrastive loss accounts for the correspondence among feature-level, image-level, and pixel-level projections, so that multi-level representations are learned by the encoder and decoder during pre-training; (3) experiments on multiple medical image datasets indicate that our JCL framework outperforms existing state-of-the-art contrastive learning strategies.
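To make point (2) concrete, below is a minimal sketch of what a multi-level contrastive objective of this kind might look like. The helper info_nce, the equal weighting of the three terms, and all tensor shapes are our illustrative assumptions, not the paper's code.

import torch
import torch.nn.functional as F

def info_nce(a, b, tau=0.1):
    """InfoNCE between aligned views a, b of shape (N, D): row i of `a`
    is positive with row i of `b` and negative with every other row."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def multi_level_loss(feat1, feat2, img1, img2, pix1, pix2):
    """feat*: (B, D) encoder (feature-level) projections of two views;
    img*:  (B, D) decoder (image-level) projections;
    pix*:  (B, P, D) P sampled pixel-level projections per image."""
    l_feat = info_nce(feat1, feat2)
    l_img = info_nce(img1, img2)
    # Flatten so corresponding pixels across views form the positive pairs.
    l_pix = info_nce(pix1.flatten(0, 1), pix2.flatten(0, 1))
    return l_feat + l_img + l_pix  # equal weighting is an assumption

Because the image- and pixel-level terms act on decoder outputs, both the encoder and the decoder receive a training signal, which is consistent with the abstract's goal of pre-training both in one stage.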