Multi-Modal Sentiment Analysis with Dynamic Attention Fusion
Sadia Abdulhalim, Muaz Albaghdadi, Moshiur Farazi
arXiv.org Artificial Intelligence
Abstract: Traditional sentiment analysis has long been a unimodal task, relying solely on text. This approach overlooks nonverbal cues such as vocal tone and prosody that are essential for capturing true emotional intent. We introduce Dynamic Attention Fusion (DAF), a lightweight framework that combines frozen text embeddings from a pretrained language model with acoustic features from a speech encoder, using an adaptive attention mechanism to weight each modality per utterance. Without any fine-tuning of the underlying encoders, our proposed DAF model consistently outperforms both static fusion and unimodal baselines on a large multimodal benchmark. We report notable gains in F1-score and reductions in prediction error, and perform a variety of ablation studies that support our hypothesis that the dynamic weighting strategy is crucial for modeling emotionally complex inputs. By effectively integrating verbal and non-verbal information, our approach offers a more robust foundation for sentiment prediction and carries broader impact for affective computing applications, from emotion recognition and mental health assessment to more natural human-computer interaction.

Sentiment analysis is an AI task that focuses on identifying and interpreting human emotions, opinions, and attitudes; in its multimodal form, it draws on several input modalities rather than text alone.
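The abstract describes an adaptive attention mechanism that assigns per-utterance weights to frozen text and audio embeddings before fusing them. The paper's exact formulation is not given here, so the following is only a minimal sketch of one common way to realize such a gate (a small additive-attention scorer followed by a softmax over modalities); the parameter names `W` and `v` and the dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding dimension (illustrative)

def dynamic_attention_fusion(text_emb, audio_emb, W, v):
    """Fuse two modality embeddings with per-utterance attention weights.

    A hypothetical additive-attention gate: each modality embedding is
    scored, the scores are softmax-normalized, and the fused vector is
    the resulting convex combination of the two embeddings.
    """
    modalities = np.stack([text_emb, audio_emb])  # shape (2, d)
    scores = np.tanh(modalities @ W) @ v          # one scalar score per modality
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over the two modalities
    fused = weights @ modalities                  # weighted sum, shape (d,)
    return fused, weights

# Stand-ins for frozen-encoder outputs and learned attention parameters.
text_emb = rng.normal(size=d)
audio_emb = rng.normal(size=d)
W = rng.normal(size=(d, d))
v = rng.normal(size=d)

fused, weights = dynamic_attention_fusion(text_emb, audio_emb, W, v)
```

Because the weights are recomputed from the embeddings of each utterance, an emotionally ambiguous transcript paired with expressive prosody can shift weight toward the acoustic channel, which is the behavior the abstract credits for the gains over static fusion.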
Sep-30-2025