Is it A Horror Film or a Rom-Com? AI Can Predict Based Solely on Music. - USC Viterbi

#artificialintelligence

Study authors include Professor Shrikanth Narayanan, Timothy Greer, Dillon Knox, and Benjamin Ma. (Images Courtesy of Narayanan, Greer, Knox, and Ma)

Music is an indispensable element in film: it establishes atmosphere and mood, drives the viewer's emotional reactions, and significantly influences the audience's interpretation of the story. In a recent paper published in PLOS One, a research team at the USC Viterbi School of Engineering, led by Professor Shrikanth Narayanan, sought to objectively examine the effect of music on cinematic genres. Their study aimed to determine if AI-based technology could predict the genre of a film based on the soundtrack alone. "By better understanding how music affects the viewer's perception of a film, we gain insights into how film creators can reach their audience in a more compelling way," said Narayanan, University Professor and Niki and Max Nikias Chair in Engineering, professor of electrical and computer engineering and computer science and the director of USC Viterbi's Signal Analysis and Interpretation Laboratory (SAIL). The notion that different film genres are more likely to use certain musical elements in their soundtrack is rather intuitive: a lighthearted romance might include rich string passages and lush, lyrical melodies, while a horror film might instead feature unsettling, piercing frequencies and eerily discordant notes.
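The prediction task described here can be pictured as a simple feature-extraction-plus-classification pipeline. The sketch below is only an illustration of that general idea, not the USC team's actual method: it summarizes each soundtrack clip with a handful of librosa audio features and trains a scikit-learn classifier; the file names and genre labels are hypothetical placeholders.

```python
# A minimal sketch of genre-from-soundtrack classification (not the study's
# actual pipeline). File names and labels below are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def soundtrack_features(path):
    """Summarize a clip with mean MFCCs, spectral centroid, and zero-crossing rate."""
    y, sr = librosa.load(path, duration=60.0)  # analyze the first 60 seconds
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    return np.concatenate([mfcc, [centroid, zcr]])

# Hypothetical training set: (soundtrack clip, film genre) pairs.
train_clips = [("romcom_cue_01.wav", "romance"),
               ("romcom_cue_02.wav", "romance"),
               ("horror_cue_01.wav", "horror"),
               ("horror_cue_02.wav", "horror")]
X = np.stack([soundtrack_features(path) for path, _ in train_clips])
labels = [genre for _, genre in train_clips]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict([soundtrack_features("unknown_cue.wav")]))
```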


Why Music Makes Us Feel (According to AI)

#artificialintelligence

In the neuroimaging experiment, 40 volunteers listened to a series of sad or happy musical excerpts while their brains were scanned using MRI. This was conducted at USC's Brain and Creativity Institute by Assal Habibi, an assistant professor of psychology at USC Dornsife College of Letters, Arts and Sciences, and her team, including Matthew Sachs, a postdoctoral scholar currently at Columbia University. To measure physical reactions, 60 people listened to music on headphones while their heart activity and skin conductance were measured. The same group also rated the intensity of emotion (happy or sad) from 1 to 10 while listening to the music. Then, the computer scientists crunched the data using AI algorithms to determine which auditory features people responded to consistently.
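One way to picture that last step: correlate per-excerpt audio features with the listeners' averaged 1-to-10 intensity ratings and keep the features that track the ratings most consistently. The sketch below is a hedged illustration of that idea, not the study's actual analysis; the feature names and all numbers are hypothetical placeholders.

```python
# A minimal sketch, not the study's actual analysis: rank hypothetical
# per-excerpt audio features by how strongly they co-vary with listeners'
# averaged 1-10 emotion-intensity ratings.
import numpy as np
from scipy.stats import pearsonr

feature_names = ["tempo", "brightness", "pulse_clarity", "rms_energy"]

# Rows = musical excerpts, columns = the features above (hypothetical values).
features = np.array([
    [ 60.0, 0.20, 0.30, 0.10],
    [120.0, 0.55, 0.70, 0.40],
    [ 90.0, 0.35, 0.50, 0.25],
    [140.0, 0.70, 0.85, 0.60],
    [ 75.0, 0.25, 0.40, 0.15],
])
# Mean intensity rating (1-10) the listener group gave each excerpt.
mean_ratings = np.array([2.1, 7.4, 5.0, 8.8, 3.2])

# Features whose correlation with the ratings is strong and consistent are
# candidates for explaining the reported emotional response.
for name, column in zip(feature_names, features.T):
    r, p = pearsonr(column, mean_ratings)
    print(f"{name:14s} r={r:+.2f} (p={p:.3f})")
```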


Understanding Salsa

Communications of the ACM

Latin America, with its rich and varied cultural heritage, is a region widely known for its diverse musical rhythms. Indeed, music and dance constitute an important part of Latin American cultural assets and identity.2 Some of these rhythms, although famous worldwide, belong to specific regions; for example, samba is from Brazil, tango is from Argentina, merengue is from the Dominican Republic, corrido is from Mexico, and vallenato is from Colombia, among many other examples. Most of them were created through cultural interaction among people from African, Native American, and European cultures who shared their music and instruments. Those heterogeneous cultural characteristics made these music styles appealing to an international audience.


Lukthung Classification Using Neural Networks on Lyrics and Audios

arXiv.org Machine Learning

Music genre classification is a widely researched topic in music information retrieval (MIR). Being able to automatically tag genres will benefit music streaming service providers such as JOOX, Apple Music, and Spotify in their content-based recommendation. However, most studies on music classification have been done on western songs, which differ from Thai songs. Lukthung, a distinctive and long-established type of Thai music, is one of the most popular music genres in Thailand and has a specific group of listeners. In this paper, we develop neural networks to distinguish the Lukthung genre from others using both lyrics and audio. Words used in Lukthung songs are particularly poetical, and their musical styles are uniquely composed of traditional Thai instruments. We leverage these two main characteristics by building a lyrics model based on bag-of-words (BoW), and an audio model using a convolutional neural network (CNN) architecture. We then aggregate the intermediate features learned from both models to build a final classifier.
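A rough PyTorch sketch of that two-branch design follows: a dense bag-of-words lyrics branch and a small CNN audio branch whose intermediate features are concatenated and passed to a final Lukthung-vs-other classifier. The layer sizes, vocabulary size, and spectrogram shape are illustrative assumptions, not values taken from the paper.

```python
# Illustrative two-branch model (lyrics BoW + audio CNN); hyperparameters
# and input shapes are assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn

class LukthungClassifier(nn.Module):
    def __init__(self, vocab_size=5000, n_mels=128):
        super().__init__()
        # Lyrics branch: dense layers over a bag-of-words count vector.
        self.lyrics = nn.Sequential(
            nn.Linear(vocab_size, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )
        # Audio branch: small CNN over a (1, n_mels, time) mel spectrogram.
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32-dim audio embedding
        )
        # Final classifier over the concatenated intermediate features.
        self.head = nn.Linear(64 + 32, 2)  # Lukthung vs. other

    def forward(self, bow, spectrogram):
        z = torch.cat([self.lyrics(bow), self.audio(spectrogram)], dim=1)
        return self.head(z)

model = LukthungClassifier()
logits = model(torch.rand(4, 5000), torch.rand(4, 1, 128, 431))
print(logits.shape)  # torch.Size([4, 2])
```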