Emotion Manipulation Through Music -- A Deep Learning Interactive Visual Approach

Abdalla, Adel N., Osborne, Jared, Andonie, Razvan

arXiv.org Artificial Intelligence 

In recent years, the fields of Music Information Retrieval (MIR) and Music Emotion Recognition (MER) have received significant attention, leading to multiple advances in how music is analyzed [1, 2]. These developments have improved the accuracy of determining which emotions are present in a given music sample, but the current state of the art is only now surpassing 75% accuracy, achieved with Random Forest and Support Vector Machine models [3]. This contrasts with the field of speech recognition, where current models approach 100% accuracy for word identification across hundreds of languages [4] and 85% for standard speech emotion recognition [5]. The additional challenges in music recognition come from the nature of music itself: the lyrical and emotional content of a vocalist's contribution is only one part of the whole. Tempo, rhythm, timbre, instrumentation choice, perceived genre, and other factors combine to shape the emotional and tonal landscape of any given work into a unique blend that is interpreted subjectively by individual listeners [6]. The goal of our paper is to show that by changing the underlying structure of a small subset of a piece's musical features, we can shift its perceived emotional content toward a specific target emotion.
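To make the setting concrete, the sketch below is a toy illustration, not the authors' system: it extracts a handful of MER-style features with librosa, fits the kind of Random Forest baseline cited above, and then nudges a clip's tempo and pitch with standard librosa effects to show how manipulating a small subset of features can move a classifier's emotion prediction. All clips, labels, and parameter values here are synthetic and hypothetical stand-ins for a real labeled corpus.

```python
# A toy sketch, not the authors' pipeline: MER-style features + a Random
# Forest baseline, then a tempo/pitch manipulation of the kind the paper
# targets. All clips and emotion labels below are synthetic/hypothetical.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

SR = 22050

def extract_features(y, sr=SR):
    """Fixed-length feature vector: rhythm, timbre, harmony, brightness."""
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # rhythm
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # harmony
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
    return np.hstack([np.atleast_1d(tempo).astype(float),
                      mfcc.mean(axis=1), chroma.mean(axis=1),
                      [centroid.mean()]])

def make_clip(bpm, freq, seconds=5.0):
    """Synthetic stand-in for a labeled excerpt: click track + sine tone."""
    n = int(seconds * SR)
    clicks = librosa.clicks(times=np.arange(0, seconds, 60.0 / bpm),
                            sr=SR, length=n)
    t = np.linspace(0.0, seconds, n, endpoint=False)
    return clicks + 0.3 * np.sin(2 * np.pi * freq * t)

# Hypothetical corpus: fast/bright clips labeled "happy", slow/low "sad".
train = [(160, 880, "happy"), (150, 660, "happy"),
         (70, 110, "sad"), (60, 90, "sad")]
X = np.stack([extract_features(make_clip(b, f)) for b, f, _ in train])
labels = np.array([lab for _, _, lab in train])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Manipulation step: speed up and pitch-shift a "sad" clip, then check
# whether the predicted emotion moves toward "happy".
sad = make_clip(65, 100)
brighter = librosa.effects.pitch_shift(
    librosa.effects.time_stretch(sad, rate=2.2), sr=SR, n_steps=7)
print(clf.predict([extract_features(sad)]))       # expected: ['sad']
print(clf.predict([extract_features(brighter)]))  # plausibly: ['happy']
```

The feature set (tempo, MFCCs, chroma, spectral centroid) mirrors the tempo/rhythm/timbre factors named above; in a real system the Random Forest would be trained on a labeled emotion corpus rather than synthetic tones.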
