Leveraging LLM Embeddings for Cross Dataset Label Alignment and Zero Shot Music Emotion Prediction

Renhang Liu, Abhinaba Roy, Dorien Herremans

arXiv.org Artificial Intelligence 

In this work, we present a novel method for music emotion recognition that leverages Large Language Model (LLM) embeddings for label alignment across multiple datasets and zero-shot prediction on novel categories. First, we compute LLM embeddings for emotion labels and apply non-parametric clustering to group semantically similar labels across multiple datasets whose label sets are disjoint. We use the resulting cluster centers to map music features (MERT) into the LLM embedding space. To further enhance the model, we introduce an alignment regularization that encourages MERT embeddings from different clusters to remain dissociated, which improves the model's ability to adapt to unseen datasets. We demonstrate the effectiveness of our approach by performing zero-shot inference on a new dataset, showcasing its ability to generalize to unseen labels without additional training.

The task of automatic music emotion recognition has been a long-standing challenge in the field of music information retrieval (Yang & Chen, 2012; Kim et al., 2010; Kang & Herremans, 2024). Accurately predicting the emotional impact of music has numerous valuable applications, ranging from enhancing music streaming recommendations to providing more effective tools for music therapists.
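To make the label-alignment step concrete, the sketch below embeds emotion labels from two datasets with disjoint label sets and groups them with non-parametric clustering. This is a minimal illustration under stated assumptions: the text embedder (all-MiniLM-L6-v2 via sentence-transformers), the choice of MeanShift as the non-parametric clustering algorithm, and the example label lists are placeholders, not the paper's exact LLM embedder or configuration.

```python
# Minimal sketch of the label-clustering step, assuming a sentence-transformer
# as the text embedder and MeanShift as the non-parametric clustering method.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import MeanShift

# Disjoint label sets from two hypothetical emotion datasets.
labels_a = ["happy", "sad", "tender", "angry"]
labels_b = ["joyful", "melancholic", "calm", "aggressive"]
all_labels = labels_a + labels_b

# 1. Embed every emotion label with a text embedding model.
text_encoder = SentenceTransformer("all-MiniLM-L6-v2")
label_embeddings = text_encoder.encode(all_labels)  # shape: (n_labels, dim)

# 2. Non-parametric clustering groups semantically similar labels
#    (e.g., "happy" and "joyful") without fixing the number of clusters.
clustering = MeanShift().fit(label_embeddings)
cluster_ids = clustering.labels_
cluster_centers = clustering.cluster_centers_

for label, cid in zip(all_labels, cluster_ids):
    print(f"{label}: cluster {cid}")
```

In the method described above, these cluster centers would then serve as regression targets when mapping MERT audio embeddings into the shared text embedding space, enabling zero-shot prediction for labels unseen during training.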