Multi-modal Anchor Gated Transformer with Knowledge Distillation for Emotion Recognition in Conversation
Jie Li, Shifei Ding, Lili Guo, Xuan Li
arXiv.org Artificial Intelligence
Emotion Recognition in Conversation (ERC) aims to detect the emotion of each individual utterance within a conversation. Generating efficient, modality-specific representations for each utterance remains a significant challenge. Previous studies have proposed various models to integrate features extracted by different modality-specific encoders. However, they neglect the varying contributions of modalities to this task and introduce high complexity by aligning modalities at the frame level. To address these challenges, we propose the Multi-modal Anchor Gated Transformer with Knowledge Distillation (MAGTKD) for the ERC task. Specifically, prompt learning is employed to enhance textual modality representations, while knowledge distillation is utilized to strengthen the representations of weaker modalities. Furthermore, we introduce a multi-modal anchor gated transformer to effectively integrate utterance-level representations across modalities. Extensive experiments on the IEMOCAP and MELD datasets demonstrate the effectiveness of knowledge distillation in enhancing modality representations, and our model achieves state-of-the-art performance in emotion recognition. Our code is available at: https://github.com/JieLi-dd/
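The abstract states that knowledge distillation is used to strengthen the representations of weaker modalities (e.g., audio distilled from the stronger text branch). The paper's exact objective is not given here; below is a minimal sketch of a standard temperature-scaled KL distillation loss, with illustrative function names that are assumptions, not taken from the authors' code.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Hypothetical mapping to MAGTKD: the teacher would be the stronger
    (text) branch and the student a weaker modality such as audio.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    # Conventional T^2 scaling keeps gradient magnitudes comparable
    # across temperatures.
    return (temperature ** 2) * kl

# Toy example: the student disagrees with the teacher, so the loss is
# positive; identical logits would give a loss of zero.
loss = distillation_loss([3.0, 1.0, 0.2], [1.0, 2.0, 0.5])
```

In practice this loss would be combined with the student's usual cross-entropy term on the emotion labels, weighted by a tunable coefficient.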
Jun-24-2025
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science > Emotion (0.86)
- Machine Learning (1.00)
- Natural Language (1.00)
- Representation & Reasoning (0.94)