Contrastive Learning of Sociopragmatic Meaning in Social Media
Chiyu Zhang, Muhammad Abdul-Mageed, Ganesh Jawahar
–arXiv.org Artificial Intelligence
Recent progress in representation and contrastive learning in NLP has not widely considered the class of sociopragmatic meaning (i.e., meaning in interaction within different language communities). To bridge this gap, we propose a novel framework for learning task-agnostic representations transferable to a wide range of sociopragmatic tasks (e.g., emotion, hate speech, humor, sarcasm). Our framework outperforms other contrastive learning frameworks on both in-domain and out-of-domain data, across both the general and few-shot settings. For example, compared to two popular pre-trained language models, our method obtains an improvement of 11.66 average F1 across 16 datasets when fine-tuned on only 20 training samples per dataset. Our code is available at: https://github.com/UBC-NLP/infodcl
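For readers who want a concrete starting point, the sketch below shows a generic in-batch InfoNCE-style contrastive objective over sentence embeddings. This is not the paper's InfoDCL objective (see the linked repository for that); the toy encoder outputs, the `temperature` value, and the batch construction are illustrative assumptions only.

```python
# Minimal sketch of a generic in-batch InfoNCE-style contrastive loss.
# NOT the paper's InfoDCL framework; values and toy data are illustrative.
import torch
import torch.nn.functional as F

def info_nce_loss(anchors: torch.Tensor,
                  positives: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """Contrast each anchor with its positive against all other
    positives in the batch (in-batch negatives)."""
    a = F.normalize(anchors, dim=-1)        # (B, D) unit-norm embeddings
    p = F.normalize(positives, dim=-1)      # (B, D)
    logits = a @ p.T / temperature          # (B, B) scaled cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets) # matched pairs lie on the diagonal

if __name__ == "__main__":
    torch.manual_seed(0)
    anchors = torch.randn(8, 128)                     # e.g., encoded tweets
    positives = anchors + 0.1 * torch.randn(8, 128)   # e.g., augmented views
    print(info_nce_loss(anchors, positives).item())
```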
May-24-2023