Lukthung Classification Using Neural Networks on Lyrics and Audios

arXiv.org Machine Learning

Music genre classification is a widely researched topic in music information retrieval (MIR). Automatic genre tagging benefits music streaming providers such as JOOX, Apple Music, and Spotify in their content-based recommendation. However, most studies on music classification have been done on Western songs, which differ from Thai songs. Lukthung, a distinctive and long-established type of Thai music, is one of the most popular genres in Thailand and has a specific group of listeners. In this paper, we develop neural networks to distinguish the Lukthung genre from others using both lyrics and audio. Words used in Lukthung songs are particularly poetical, and the musical style is uniquely composed of traditional Thai instruments. We leverage these two main characteristics by building a lyrics model based on bag-of-words (BoW) features and an audio model using a convolutional neural network (CNN) architecture. We then aggregate the intermediate features learned from both models to build a final classifier.
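As a concrete illustration of the two-branch design described above, here is a minimal PyTorch sketch: a bag-of-words lyrics branch and a CNN audio branch whose intermediate features are concatenated before a final classifier. Concatenation is one simple way to aggregate intermediate features, and all layer sizes, the vocabulary size, and the spectrogram shape are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class LukthungClassifier(nn.Module):
    def __init__(self, vocab_size=5000, n_mels=128):
        super().__init__()
        # Lyrics branch: BoW vector -> dense intermediate features
        self.lyrics = nn.Sequential(
            nn.Linear(vocab_size, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )
        # Audio branch: CNN over a (1, n_mels, time) mel-spectrogram
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (batch, 32, 1, 1)
            nn.Flatten(),
        )
        # Final classifier over the aggregated intermediate features
        self.head = nn.Linear(64 + 32, 1)

    def forward(self, bow, spectrogram):
        feats = torch.cat([self.lyrics(bow), self.audio(spectrogram)], dim=1)
        return torch.sigmoid(self.head(feats))  # P(Lukthung)

model = LukthungClassifier()
p = model(torch.rand(4, 5000), torch.rand(4, 1, 128, 400))
print(p.shape)  # torch.Size([4, 1])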


Audio Data Analysis Using Deep Learning with Python (Part 1) - KDnuggets

#artificialintelligence

While much of the literature and buzz on deep learning concerns computer vision and natural language processing (NLP), audio analysis, a field that includes automatic speech recognition (ASR), digital signal processing, and music classification, tagging, and generation, is a growing subdomain of deep learning applications. Some of the most popular and widespread machine learning systems, the virtual assistants Alexa, Siri, and Google Home, are largely built atop models that can extract information from audio signals. Audio data analysis is about analyzing and understanding audio signals captured by digital devices, with numerous applications in the enterprise, healthcare, productivity, and smart cities. Applications include customer satisfaction analysis from customer support calls, media content analysis and retrieval, medical diagnostic aids and patient monitoring, assistive technologies for people with hearing impairments, and audio analysis for public safety. In the first part of this article series, we will cover what you need to know before getting started with audio data analysis and how to extract the necessary features from a sound file. We will also build an artificial neural network (ANN) for music genre classification.
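A minimal sketch of the workflow outlined above: load an audio file with librosa, extract MFCC features, and feed a time-averaged summary to a small fully connected network. The file name, feature choice, and layer sizes are illustrative assumptions rather than the article's exact code.

import librosa
import torch
import torch.nn as nn

# 1. Load ~30 s of audio and extract MFCCs (a common spectral feature).
#    The file name here is a hypothetical GTZAN-style example.
y, sr = librosa.load("blues.00000.wav", duration=30)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)               # (20, frames)
features = torch.tensor(mfcc.mean(axis=1), dtype=torch.float32)  # summarize over time

# 2. A small fully connected ANN mapping 20 features to 10 genres
ann = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 10),  # one logit per genre
)
logits = ann(features)
print(logits.softmax(dim=0))  # genre probabilities for this clip

In practice, features would be extracted for a whole labeled dataset and the network trained with cross-entropy before any prediction is meaningful; this sketch only shows the shape of the pipeline.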


Transfer Learning of Artist Group Factors to Musical Genre Classification

arXiv.org Machine Learning

The automated recognition of music genres from audio is a challenging problem, as genre labels are subjective and noisy. Artist labels are less subjective and less noisy, and certain artists may relate more strongly to certain genres. However, artist labels are not guaranteed to be available for a given audio segment at prediction time. Therefore, in this work, we propose to apply the transfer learning framework, learning artist-related information that is then used at inference time for genre classification. We consider different types of artist-related information, expressed through artist group factors, which allow for more efficient learning and stronger robustness to potential label noise. Furthermore, we investigate how to achieve the highest validation accuracy on the FMA dataset by experimenting with various transfer methods, including single-task transfer, multi-task transfer, and finally multi-task learning.
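The following minimal PyTorch sketch illustrates the multi-task transfer idea: a shared audio encoder is trained jointly with an auxiliary artist-group head and a genre head, so that genre prediction at inference time needs no artist labels. The number of artist group factors, the number of genre classes, and all layer sizes are illustrative assumptions, not the paper's setup.

import torch
import torch.nn as nn

shared = nn.Sequential(                       # shared spectrogram encoder
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (batch, 32)
)
artist_head = nn.Linear(32, 40)               # auxiliary: 40 artist group factors
genre_head = nn.Linear(32, 8)                 # target: 8 genre classes

x = torch.rand(4, 1, 128, 256)                # batch of mel-spectrograms
feats = shared(x)
aux_loss = nn.functional.cross_entropy(artist_head(feats),
                                       torch.randint(0, 40, (4,)))
genre_loss = nn.functional.cross_entropy(genre_head(feats),
                                         torch.randint(0, 8, (4,)))
(aux_loss + genre_loss).backward()            # joint multi-task objective

At inference, only shared and genre_head are evaluated, which is why missing artist labels are not a problem; in a pure transfer variant, the encoder would first be trained on the auxiliary task alone and the genre head fitted afterwards.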


MUSIC CLASSIFICATION USING ARTIFICIAL INTELLIGENCE

#artificialintelligence

Music is the most popular art form, performed and listened to by billions of people every day. There are many genres of music, such as pop, classical, jazz, and folk, and each genre differs in its instruments, tone, rhythm, beat, and flow. Digital music and online streaming have become very popular due to the growing number of users. The goal here is to create a machine learning model that classifies music samples into different genres.


Music Artist Classification with Convolutional Recurrent Neural Networks

arXiv.org Machine Learning

Previous attempts at music artist classification use frame-level audio features which summarize frequency content within short intervals of time. Comparatively, more recent music information retrieval tasks take advantage of temporal structure in audio spectrograms using deep convolutional and recurrent models. This paper revisits artist classification with this new framework and empirically explores the impact of incorporating temporal structure in the feature representation. To this end, an established classification architecture, a convolutional recurrent neural network (CRNN), is applied to the artist20 music artist identification dataset under a comprehensive set of conditions. These include audio clip length, which is a novel contribution of this work, and previously identified considerations such as dataset split and feature level. Our results improve upon baseline works, verify the influence of production details on classification performance, and demonstrate the trade-offs between sample length and training set size. The best-performing model achieves an average F1-score of 0.937 across three independent trials, a substantial improvement over the corresponding baseline under similar conditions. Finally, to showcase the effectiveness of the CRNN's feature extraction capabilities, we visualize audio samples at its bottleneck layer, demonstrating that the learned representations segment into clusters belonging to their respective artists. Music information retrieval (MIR) encompasses most audio analysis tasks, such as genre classification, song identification, chord recognition, sound event detection, mood detection, and feature extraction [1], [2].
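The following minimal PyTorch sketch shows a CRNN of the kind described: convolutional layers summarize local spectro-temporal patterns in a mel-spectrogram, a GRU models their temporal structure, and the final hidden state (the bottleneck features mentioned above) feeds the artist classifier. All sizes are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=128, n_artists=20):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((4, 2)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((4, 2)),
        )  # -> (batch, 64, n_mels/16, time/4)
        self.gru = nn.GRU(input_size=64 * (n_mels // 16), hidden_size=64,
                          batch_first=True)
        self.fc = nn.Linear(64, n_artists)

    def forward(self, spec):                            # spec: (batch, 1, n_mels, time)
        x = self.conv(spec)
        b, c, f, t = x.shape
        x = x.permute(0, 3, 1, 2).reshape(b, t, c * f)  # one feature vector per time step
        _, h = self.gru(x)                              # final hidden state = bottleneck
        return self.fc(h.squeeze(0))                    # artist logits

logits = CRNN()(torch.rand(2, 1, 128, 300))
print(logits.shape)  # torch.Size([2, 20])

The clip-length trade-off discussed above corresponds here to the time dimension of the input spectrogram: longer clips give the GRU more temporal context per example but yield fewer training examples from a fixed set of songs.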