Lee, Chungpa
A Theoretical Framework for Preventing Class Collapse in Supervised Contrastive Learning
Lee, Chungpa, Oh, Jeongheon, Lee, Kibok, Sohn, Jy-yong
Supervised contrastive learning (SupCL) has emerged as a prominent approach in representation learning, leveraging both supervised and self-supervised losses. However, achieving an optimal balance between these losses is challenging; failing to do so can lead to class collapse, reducing discrimination among individual embeddings in the same class. In this paper, we present theoretically grounded guidelines for SupCL to prevent class collapse in learned representations. Specifically, we introduce the Simplex-to-Simplex Embedding Model (SSEM), a theoretical framework that models various embedding structures, including all embeddings that minimize the supervised contrastive loss. Through SSEM, we analyze how hyperparameters affect learned representations, offering practical guidelines for hyperparameter selection to mitigate the risk of class collapse. Our theoretical findings are supported by empirical results across synthetic and real-world datasets.
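The abstract does not spell out the combined objective, so a minimal sketch of the kind of loss being balanced may help. The code below assumes the SupCL objective is a weighted sum of a supervised contrastive term (positives are same-class samples) and a self-supervised term (positives are augmented views), with a single balancing weight alpha standing in for the hyperparameters analyzed in the paper; the names supcl_loss, contrastive_term, alpha, and view_idx are illustrative, not the paper's notation.

```python
import torch

def contrastive_term(z, pos_mask, temperature=0.1):
    """InfoNCE-style term: each row of `pos_mask` marks the positives of that anchor.
    z: (n, d) L2-normalized embeddings; pos_mask: (n, n) boolean with a False diagonal."""
    n = z.size(0)
    sim = z @ z.T / temperature
    logits = sim.masked_fill(torch.eye(n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    return -(pos_log_prob.sum(1) / pos_mask.sum(1).clamp(min=1)).mean()

def supcl_loss(z, labels, view_idx, alpha=0.5, temperature=0.1):
    """Hypothetical SupCL objective: supervised term + alpha * self-supervised term.
    labels: (n,) class labels; view_idx: (n,) index of each sample's augmented view."""
    n = z.size(0)
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    same_class.fill_diagonal_(False)                 # supervised positives: same class, not self
    same_view = torch.zeros(n, n, dtype=torch.bool)
    same_view[torch.arange(n), view_idx] = True      # self-supervised positives: own augmented view
    sup = contrastive_term(z, same_class, temperature)
    ssl = contrastive_term(z, same_view, temperature)
    return sup + alpha * ssl                         # alpha controls the balance the paper analyzes
```

How alpha trades off the two terms is exactly the kind of hyperparameter choice the guidelines above address: too little weight on the self-supervised term risks class collapse, since nothing discourages same-class embeddings from coinciding.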
A Generalized Theory of Mixup for Structure-Preserving Synthetic Data
Lee, Chungpa, Im, Jongho, Kim, Joseph H. T.
A similar approach, SMOTE (Synthetic Minority Over-sampling Technique) (Chawla et al., 2002; He et al., 2008; Bunkhumpornpat et al., 2012; Douzas et al., 2018), also leverages interpolated synthetic instances to enhance model performance, particularly for imbalanced or long-tail distributions, showcasing the effectiveness of mixup methods. In this paper, we place special focus on data synthesis, an important constituent of data augmentation. While there is extensive research on how synthetic data generated by mixup can enhance model performance (Carratino et al., 2022; Zhang et al., 2021), less attention has been given to understanding the fundamental properties of the synthesized data itself; see Sec. 2.1. In fact, most mixup methods generate linearly interpolated instances by taking a weighted average, where the weights are randomly drawn from distributions supported on [0, 1], such as the beta or the uniform distribution. However, this interpolation reduces the variance, which inevitably distorts the statistical structure of the original dataset, both marginally and jointly. The net effect is a less dispersed dataset that emphasizes representative instances while suppressing the others. In this regard, mixup-based synthetic datasets achieve better performance in training machine learning models at the cost of sacrificing non-representative instances, such as tail instances, in the dataset.
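As a concrete illustration of the variance-reduction effect described above (a small numerical sketch, not the paper's derivation): interpolating independently drawn pairs with Beta-distributed weights preserves the mean but shrinks the variance by the factor E[lambda^2 + (1 - lambda)^2] < 1. The distribution, sample size, and Beta parameter below are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original one-dimensional sample (exponential is deliberately skewed with heavy right tail).
x = rng.exponential(scale=2.0, size=100_000)

# Vanilla mixup: x_tilde = lam * x_i + (1 - lam) * x_j with lam ~ Beta(a, a).
a = 1.0                                   # Beta(1, 1) is the uniform distribution on [0, 1]
lam = rng.beta(a, a, size=x.size)
i, j = rng.integers(0, x.size, size=(2, x.size))
x_tilde = lam * x[i] + (1.0 - lam) * x[j]

# The mean is preserved, but the variance shrinks by roughly E[lam^2 + (1 - lam)^2] < 1.
shrink = np.mean(lam**2 + (1.0 - lam)**2)
print(f"mean:     original {x.mean():.3f}   mixup {x_tilde.mean():.3f}")
print(f"variance: original {x.var():.3f}   mixup {x_tilde.var():.3f}   (~ {shrink:.3f} x original)")
```

Running this shows the synthesized sample concentrating around the mean, which is the distortion of the marginal (and, in higher dimensions, joint) structure that the paper's structure-preserving generalization targets.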
Analysis of Using Sigmoid Loss for Contrastive Learning
Lee, Chungpa, Chang, Joonhwan, Sohn, Jy-yong
Contrastive learning has been a prominent branch of self-supervised learning for several years. In particular, CLIP, which applies contrastive learning to large sets of captioned images, has garnered significant attention. Recently, SigLIP, a variant of CLIP, has been proposed, which uses the sigmoid loss instead of the standard InfoNCE loss. SigLIP achieves performance comparable to CLIP in a more efficient manner by eliminating the need for a global view. However, the theoretical understanding of using the sigmoid loss in contrastive learning remains underexplored. In this paper, we provide a theoretical analysis of using the sigmoid loss in contrastive learning, from the perspective of the geometric structure of learned embeddings. First, we propose the double-Constant Embedding Model (CCEM), a framework for parameterizing various well-known embedding structures by a single variable. Interestingly, the proposed CCEM is proven to contain the optimal embedding with respect to the sigmoid loss. Second, we mathematically analyze the optimal embedding minimizing the sigmoid loss for contrastive learning. The optimal embedding ranges from a simplex equiangular tight frame to an antipodal structure, depending on the temperature parameter used in the sigmoid loss. Third, our experimental results on synthetic datasets coincide with the theoretical results on the optimal embedding structures.
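For context, a minimal sketch of a SigLIP-style pairwise sigmoid loss follows. It assumes L2-normalized image and text embeddings, uses fixed temperature and bias values purely for illustration (SigLIP learns both), and is not presented as the exact objective analyzed in the paper.

```python
import torch
import torch.nn.functional as F

def sigmoid_contrastive_loss(img_emb, txt_emb, t=10.0, b=-10.0):
    """SigLIP-style pairwise sigmoid loss (sketch): matching image-text pairs get
    label +1, all other pairs in the batch get label -1."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T * t + b                  # (n, n) pairwise logits
    labels = 2.0 * torch.eye(logits.size(0)) - 1.0        # +1 on the diagonal, -1 elsewhere
    # Each pair is an independent binary classification problem, so no softmax
    # normalization over the whole batch (the "global view") is required.
    return -F.logsigmoid(labels * logits).sum() / logits.size(0)
```

The temperature t here plays the role of the parameter whose value, according to the analysis above, determines whether the optimal embedding is a simplex equiangular tight frame or an antipodal structure.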