A Novel Approach for Multimodal Emotion Recognition: Multimodal Semantic Information Fusion
Dai, Wei, Zheng, Dequan, Yu, Feng, Zhang, Yanrong, Hou, Yaohui
–arXiv.org Artificial Intelligence
With the rapid development of artificial intelligence and computer vision technologies, emotion recognition has become an important research direction in various fields such as human-computer interaction (HCI), intelligent customer service, and mental health monitoring [Poria et al., 2017a]. The goal of emotion recognition is to analyze an individual's emotional state through multimodal information, such as speech, text, and visual data, to achieve emotional understanding in intelligent systems. However, traditional emotion recognition methods mainly focus on feature extraction and emotion classification from a single modality, which limits their effectiveness in complex real-world applications. In recent years, with the continuous advancement of multimodal learning and deep learning technologies, multimodal emotion recognition (MER) has gradually become a research hotspot. MER improves the accuracy and robustness of emotion classification by integrating multiple data sources.
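The abstract describes MER as integrating multiple data sources (speech, text, visual) to improve classification accuracy and robustness. The paper's actual fusion method is not given here, so the following is only a minimal sketch of one common baseline, feature-level (early) fusion: per-modality feature vectors are concatenated and passed to a shared classifier. All names, dimensions, and the linear scorer are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def fuse_features(text_feat, audio_feat, visual_feat):
    """Early fusion: concatenate per-modality feature vectors
    into one joint representation (illustrative baseline)."""
    return np.concatenate([text_feat, audio_feat, visual_feat])

def classify(fused, weights, bias):
    """Linear scorer over the fused vector; argmax picks the
    predicted emotion class (stand-in for a trained head)."""
    logits = weights @ fused + bias
    return int(np.argmax(logits))

# Hypothetical per-modality features (dimensions chosen arbitrarily):
rng = np.random.default_rng(0)
text_feat = rng.normal(size=8)    # e.g. a sentence embedding
audio_feat = rng.normal(size=4)   # e.g. prosodic features
visual_feat = rng.normal(size=6)  # e.g. facial-expression features

fused = fuse_features(text_feat, audio_feat, visual_feat)

num_emotions = 4                  # e.g. happy / sad / angry / neutral
W = rng.normal(size=(num_emotions, fused.size))
b = np.zeros(num_emotions)
label = classify(fused, W, b)     # an integer class index in [0, 4)
```

Early fusion is the simplest integration scheme; the literature the abstract points to also includes late (decision-level) fusion and attention-based semantic fusion, which weight modalities adaptively rather than concatenating them.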
Feb-12-2025
- Country:
- North America
- Canada > Ontario
- Toronto (0.14)
- United States > California (0.14)
- Genre:
- Research Report (1.00)
- Industry:
- Health & Medicine (0.54)
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science > Emotion (1.00)
- Machine Learning > Neural Networks
- Deep Learning (1.00)
- Natural Language (1.00)
- Representation & Reasoning (1.00)
- Vision (1.00)