Human Feedback Driven Dynamic Speech Emotion Recognition
Ilya Fedorov, Dmitry Korobchenko
arXiv.org Artificial Intelligence
This work explores the new area of dynamic speech emotion recognition. Unlike traditional methods, we assume that each audio track is associated with a sequence of emotions active at different moments in time. The study focuses in particular on the animation of emotional 3D avatars. We propose a multi-stage method that includes training a classical speech emotion recognition model, synthetically generating emotional sequences, and further improving the model based on human feedback. Additionally, we introduce a novel approach to modeling emotional mixtures based on the Dirichlet distribution. The models are evaluated against ground-truth emotions extracted from a dataset of 3D facial animations, and we compare them with a sliding-window approach. Our experimental results show the effectiveness of the Dirichlet-based approach in modeling emotional mixtures. Incorporating human feedback further improves model quality while simplifying the annotation procedure.
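The abstract does not spell out how the Dirichlet distribution enters the method, but the core idea of representing an emotional state as a soft mixture can be sketched. The following is a minimal illustrative example, not the authors' implementation; the emotion label set and concentration parameters are hypothetical:

```python
import numpy as np

# Hypothetical emotion categories; the paper's actual label set is not
# given in the abstract.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

def sample_emotion_mixture(alpha, seed=None):
    """Draw one emotional mixture from a Dirichlet distribution.

    `alpha` is a concentration vector, one entry per emotion. The sample
    is a point on the probability simplex: non-negative weights over the
    emotion categories that sum to 1, i.e. a soft blend of emotions
    active at a given moment in time.
    """
    rng = np.random.default_rng(seed)
    weights = rng.dirichlet(alpha)
    return dict(zip(EMOTIONS, weights))

# A larger concentration on "happy" biases samples toward that emotion
# while still permitting a mixture with the remaining categories.
mixture = sample_emotion_mixture([1.0, 5.0, 1.0, 1.0, 1.0], seed=0)
```

A sequence of such mixtures, one per time step, would give the kind of dynamic, per-moment emotional signal the abstract describes for driving 3D avatar animation.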
Aug-22-2025