MIPS at SemEval-2024 Task 3: Multimodal Emotion-Cause Pair Extraction in Conversations with Multimodal Language Models
Zebang Cheng, Fuqiang Niu, Yuxiang Lin, Zhi-Qi Cheng, Bowen Zhang, Xiaojiang Peng
arXiv.org Artificial Intelligence
This paper presents our winning submission to Subtask 2 of SemEval-2024 Task 3 on multimodal emotion cause analysis in conversations. We propose a novel Multimodal Emotion Recognition and Multimodal Emotion Cause Extraction (MER-MCE) framework that integrates text, audio, and visual modalities using specialized emotion encoders. Our approach sets itself apart from other top-performing teams by leveraging modality-specific features for enhanced emotion understanding and causality inference. Experimental evaluation demonstrates the advantages of our multimodal approach: our submission achieved a competitive weighted F1 score of 0.3435, ranking third and trailing the first-place team by only 0.0339 and the second-place team by 0.0025. Project: https://github.com/MIPS-COLT/MER-MCE.git
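
To make the two-stage design concrete, below is a minimal PyTorch sketch of a MER-MCE-style pipeline: stage 1 fuses modality-specific features for emotion recognition, and stage 2 scores candidate utterances as causes of an emotion utterance. Everything beyond what the abstract states (the feature dimensions, concatenation fusion, the pairwise cause scorer, and the emotion label count) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of a two-stage multimodal pipeline in the spirit of MER-MCE.
# Module names, dimensions, and fusion strategy are illustrative assumptions.
# Inputs are assumed to be precomputed per-utterance features from
# off-the-shelf text/audio/visual emotion encoders.
import torch
import torch.nn as nn

NUM_EMOTIONS = 7  # e.g., six basic emotions + neutral (assumption)

class MultimodalEmotionRecognizer(nn.Module):
    """Stage 1 (MER): fuse modality-specific features, classify emotion."""
    def __init__(self, text_dim=768, audio_dim=512, visual_dim=512, hidden=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.visual_proj = nn.Linear(visual_dim, hidden)
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(3 * hidden, NUM_EMOTIONS))

    def forward(self, text_feat, audio_feat, visual_feat):
        # Simple concatenation fusion of projected modality features (assumption).
        fused = torch.cat([
            self.text_proj(text_feat),
            self.audio_proj(audio_feat),
            self.visual_proj(visual_feat),
        ], dim=-1)
        return self.classifier(fused)  # emotion logits per utterance

class CauseExtractor(nn.Module):
    """Stage 2 (MCE): score each candidate utterance as a cause of a given
    emotion utterance (pairwise scoring is an assumption)."""
    def __init__(self, utt_dim=768, hidden=256):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * utt_dim + NUM_EMOTIONS, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, emotion_utt, emotion_logits, candidate_utts):
        n = candidate_utts.size(0)
        # Condition the scorer on the target utterance and its predicted emotion.
        target = torch.cat([emotion_utt, emotion_logits], dim=-1)
        pairs = torch.cat([target.expand(n, -1), candidate_utts], dim=-1)
        return self.scorer(pairs).squeeze(-1)  # one cause score per candidate

# Toy usage with random tensors standing in for real encoder outputs.
mer, mce = MultimodalEmotionRecognizer(), CauseExtractor()
t, a, v = torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 512)
logits = mer(t, a, v)
scores = mce(t, logits, torch.randn(5, 768))  # 5 candidate utterances
print(logits.shape, scores.shape)  # torch.Size([1, 7]) torch.Size([5])
```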
Apr-11-2024
- Country:
  - North America > United States (0.28)
- Genre:
  - Research Report (1.00)
- Technology:
  - Information Technology > Artificial Intelligence
    - Cognitive Science > Emotion (0.39)
    - Machine Learning (1.00)
    - Natural Language
      - Information Retrieval (0.36)
      - Large Language Model (0.49)
      - Text Processing (0.46)
    - Vision > Face Recognition (0.30)