Improving Speech-based Emotion Recognition with Contextual Utterance Analysis and LLMs
Enshi Zhang, Christian Poellabauer
arXiv.org Artificial Intelligence
Speech Emotion Recognition (SER) focuses on identifying emotional states from spoken language. The 2024 IEEE SLT-GenSEC Challenge on Post Automatic Speech Recognition (ASR) Emotion Recognition tasks participants with exploring the capabilities of large language models (LLMs) for emotion recognition using only text data. We propose a novel approach that first refines all available transcriptions to ensure data reliability. We then segment each complete conversation into smaller dialogues and use these dialogues as context to predict the emotion of the target utterance within the dialogue. Finally, we investigate different context lengths and prompting techniques to improve prediction accuracy. Our best submission exceeded the baseline by 20% in unweighted accuracy, achieving the best performance in the challenge. All of our experiment code, prediction results, and log files are publicly available.
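The context-window idea from the abstract can be sketched as follows: given a conversation as a list of (speaker, utterance) pairs, collect the N preceding utterances and format them together with the target utterance into a classification prompt. The window size, utterance formatting, emotion label set, and prompt wording here are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of dialogue-context prompting for text-only emotion
# recognition. All formatting choices below are assumptions for
# illustration, not the paper's exact pipeline.

def build_context_window(utterances, target_idx, context_len=4):
    """Return the context_len utterances preceding the target, plus the
    target itself, formatted as a small "Speaker: text" dialogue."""
    start = max(0, target_idx - context_len)
    window = utterances[start:target_idx + 1]
    return "\n".join(f"{speaker}: {text}" for speaker, text in window)

def make_prompt(utterances, target_idx, context_len=4):
    """Wrap the dialogue window in a simple classification instruction.
    The label set (angry/happy/neutral/sad) is a common SER convention,
    assumed here rather than taken from the challenge specification."""
    dialogue = build_context_window(utterances, target_idx, context_len)
    target_speaker, _ = utterances[target_idx]
    return (
        "Given the dialogue below, classify the emotion of the final "
        f"utterance by {target_speaker} as angry, happy, neutral, or sad.\n\n"
        f"{dialogue}\n\nEmotion:"
    )

# Toy conversation; the last utterance is the prediction target.
conversation = [
    ("A", "Did you hear back about the job?"),
    ("B", "Yes, I got the offer this morning!"),
    ("A", "That's wonderful news."),
    ("B", "I still can't believe it."),
]
prompt = make_prompt(conversation, target_idx=3, context_len=2)
```

Varying `context_len` is one way to run the context-length ablation the abstract mentions: a window of 0 reduces to single-utterance classification, while larger windows expose more of the surrounding dialogue to the LLM.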
Oct-27-2024
- Genre:
  - Research Report > New Finding (0.68)
- Industry:
  - Health & Medicine (0.46)
- Technology:
  - Information Technology > Artificial Intelligence
    - Cognitive Science > Emotion (1.00)
    - Machine Learning > Neural Networks > Deep Learning (0.48)
    - Natural Language > Large Language Model (1.00)
    - Speech (1.00)