EMC$^2$: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence
Chung-Yiu Yau, Hoi-To Wai, Parameswaran Raman, Soumajyoti Sarkar, Mingyi Hong
Contrastive representation learning has been instrumental in self-supervised learning for large-scale pretraining of foundation models Radford et al. (2021); Cherti et al. (2023), as well as in the fine-tuning stage on downstream tasks Xiong et al. (2020); Lindgren et al. (2021). It encodes real-world data into low-dimensional feature vectors that abstract the important attributes of the data and generalize well outside of the training distribution. More recently, contrastive learning with multi-modal data has helped embed different data modalities into the same feature space Li et al. (2023), as in studies on visual-language models Radford et al. (2021); Alayrac et al. (2022); Cherti et al. (2023) and document understanding Xu et al. (2020); Lee et al. (2023). Contrastive learning uses pairwise comparisons of representations in the training objective, with the goal of learning representations in which positive pairs are drawn closer while negative pairs are pushed apart in the representation space. It is well known that generating a large dataset of pairwise samples, such as image-text pairs with the same semantics, costs much less than manual labeling; e.g., the WebImageText dataset used for training CLIP originates from Wikipedia articles Radford et al. (2021).
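The pairwise objective described above can be illustrated with a minimal InfoNCE-style loss sketch, assuming cosine similarity and a single anchor with one positive and several negative samples (the function name and temperature value are illustrative, not from the paper):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss for one anchor.

    anchor, positive: (d,) feature vectors; negatives: (K, d) array.
    Minimizing this loss pulls the positive toward the anchor and
    pushes the negatives away in the representation space.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # similarity of the anchor to the positive (index 0) and negatives
    sims = np.array([cos(anchor, positive)] +
                    [cos(anchor, n) for n in negatives]) / temperature
    # cross-entropy with the positive as the correct "class"
    log_probs = sims - np.log(np.sum(np.exp(sims)))
    return -log_probs[0]

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
negatives = rng.normal(size=(4, 8))
loss_close = info_nce_loss(anchor, anchor + 0.01 * rng.normal(size=8),
                           negatives)
loss_far = info_nce_loss(anchor, rng.normal(size=8), negatives)
# A positive near the anchor yields a lower loss than a random one.
```

The cost of evaluating this loss grows with the number of negatives, which is exactly why efficient negative sampling (the subject of this paper) matters at scale.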
EMC2-Net: Joint Equalization and Modulation Classification based on Constellation Network
Modulation classification (MC) is the first step performed at the receiver side unless the modulation type is explicitly indicated by the transmitter. Machine learning techniques have recently been widely used for MC. In this paper, we propose a novel MC technique dubbed Joint Equalization and Modulation Classification based on Constellation Network (EMC2-Net). Unlike prior works that treat the constellation points as an image, the proposed EMC2-Net directly uses a set of 2D constellation points to perform MC. To obtain clear and concrete constellations despite multipath fading channels, EMC2-Net consists of an equalizer and a classifier with separate and explainable roles, trained via a novel three-phase training scheme and noise-curriculum pretraining. Numerical results with linear modulation types under different channel models show that EMC2-Net achieves the performance of state-of-the-art MC techniques with significantly lower complexity.
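To make the point-set view of MC concrete, here is a toy nearest-constellation baseline, not EMC2-Net itself: it classifies a set of equalized 2D I/Q points by their average distance to the nearest ideal symbol of each candidate modulation (the reference constellations and noise level are assumptions for illustration):

```python
import numpy as np

# Illustrative ideal constellations (unit average power), not from the paper.
REFS = {
    "BPSK": np.array([[-1.0, 0.0], [1.0, 0.0]]),
    "QPSK": np.array([[1.0, 1.0], [1.0, -1.0],
                      [-1.0, 1.0], [-1.0, -1.0]]) / np.sqrt(2),
}

def classify(points):
    """points: (N, 2) array of received I/Q samples after equalization.

    Returns the modulation whose ideal symbols lie closest, on average,
    to the observed points.
    """
    scores = {}
    for name, ref in REFS.items():
        # distance from each point to every ideal symbol, then keep the nearest
        d = np.linalg.norm(points[:, None, :] - ref[None, :, :], axis=-1)
        scores[name] = d.min(axis=1).mean()
    return min(scores, key=scores.get)

rng = np.random.default_rng(1)
symbols = REFS["QPSK"][rng.integers(0, 4, size=200)]
noisy = symbols + 0.05 * rng.normal(size=symbols.shape)
pred = classify(noisy)
```

This baseline only works once the channel has been equalized; EMC2-Net's contribution is to learn the equalizer and a neural classifier jointly, rather than relying on such hand-crafted distance scores.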