
Musical Instrument Classification


Improving Musical Instrument Classification with Advanced Machine Learning Techniques

Chulev, Joanikij

arXiv.org Artificial Intelligence

Musical instrument classification, a key area in Music Information Retrieval, has gained considerable interest due to its applications in education, digital music production, and consumer media. Recent advances in machine learning, specifically deep learning, have enhanced the capability to identify and classify musical instruments from audio signals. This study applies various machine learning methods, including Naive Bayes, Support Vector Machines, Random Forests, Boosting techniques like AdaBoost and XGBoost, as well as deep learning models such as Convolutional Neural Networks and Artificial Neural Networks. The effectiveness of these methods is evaluated on the NSynth dataset, a large repository of annotated musical sounds. By comparing these approaches, the analysis aims to showcase the advantages and limitations of each method, providing guidance for developing more accurate and efficient classification systems. Additionally, hybrid models are tested and discussed. This research aims to support further studies in instrument classification by proposing new approaches and future research directions.
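The comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the study's actual pipeline: synthetic features stand in for the audio descriptors extracted from NSynth, and only a subset of the listed classifiers is shown.

```python
# Minimal sketch of a classifier comparison in the spirit of the study.
# Synthetic features stand in for real NSynth audio descriptors (assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import accuracy_score

# Synthetic 4-class "instrument" data in place of extracted audio features.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "NaiveBayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)                       # train on the held-in split
    scores[name] = accuracy_score(y_te, model.predict(X_te))
print(scores)
```

Running the same loop over real features (and adding the deep models) would yield the kind of side-by-side accuracy table the study reports.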


Musical Instrument Classification via Low-Dimensional Feature Vectors

Zhao, Zishuo, Wang, Haoyun

arXiv.org Artificial Intelligence

Music is a mysterious language that conveys feelings and thoughts through different tones and timbres. For a better understanding of timbre in music, we chose music data from 6 representative instruments, analysed their timbre features, and classified them. Instead of following the current trend of black-box neural-network classification, our project is based on a combination of MFCC and LPC features, augmented with a 6-dimensional feature vector we designed ourselves from observation and experimentation. In our white-box model, we observed significant sound patterns that distinguish different timbres and discovered connections between objective data and subjective perception. With a 32-dimensional feature vector in total and a naive all-pairs SVM, we achieved improved classification accuracy compared to any single tool. We also analyzed music pieces downloaded from the Internet, found differing performance across instruments, explored the reasons, and suggested possible ways to improve performance.
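The pipeline described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the MFCC step is replaced by a toy log-spectrum summary (real MFCCs would come from a library such as librosa), the LPC implementation is a basic autocorrelation method, and the "instruments" are synthetic harmonic tones.

```python
# Sketch: concatenate timbre features per clip, then classify with an
# all-pairs (one-vs-one) SVM. Toy features stand in for MFCCs (assumption).
import numpy as np
from sklearn.svm import SVC

def lpc_coeffs(signal, order=6):
    """Linear prediction coefficients via the autocorrelation method."""
    n = len(signal)
    r = np.array([signal[:n - k] @ signal[k:] for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    # Small ridge keeps the Toeplitz system well conditioned.
    return np.linalg.solve(R + 1e-6 * np.eye(order), r[1:])

def toy_spectral_feature(signal, n_bands=8):
    """Stand-in for MFCCs: mean log-magnitude in coarse frequency bands."""
    mag = np.abs(np.fft.rfft(signal))
    bands = np.array_split(mag, n_bands)
    return np.log(np.array([b.mean() for b in bands]) + 1e-9)

def feature_vector(signal):
    return np.concatenate([toy_spectral_feature(signal), lpc_coeffs(signal)])

# Synthetic "instruments": tones with different harmonic weightings.
rng = np.random.default_rng(0)
t = np.arange(2048) / 16000.0
X, y = [], []
for label, harmonics in enumerate([(1.0, 0.1), (0.3, 1.0), (0.6, 0.6)]):
    for _ in range(30):
        f0 = rng.uniform(200, 400)
        sig = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                  for k, a in enumerate(harmonics))
        sig += 0.01 * rng.standard_normal(len(t))
        X.append(feature_vector(sig))
        y.append(label)
X, y = np.array(X), np.array(y)

# decision_function_shape="ovo" trains one SVM per pair of instruments,
# matching the "all-pairs SVM" in the abstract.
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
```

The authors' full 32-dimensional vector additionally includes their hand-designed 6-dimensional timbre descriptor, which is not reproduced here.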


Clustering Spectral Filters for Extensible Feature Extraction in Musical Instrument Classification

Donnelly, Patrick (Montana State University) | Sheppard, John (Montana State University)

AAAI Conferences

We propose a technique for training feature-extraction models using prior expectations about the regions of importance in an instrument's timbre. Over a dataset of training examples, we extract significant spectral peaks, calculate their ratios to the fundamental frequency, and use k-means clustering to identify a set of windows of spectral prominence for each instrument. These windows are then used to extract amplitude values from training data as features for classification tasks. We test this approach on two databases of 17 instruments, cross-evaluate between datasets, and compare with MFCC features.
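The windowing idea above can be illustrated with a short sketch: collect spectral-peak frequencies across training clips, express each as a ratio to the fundamental, and let k-means place the "windows of spectral prominence". The harmonic-rich synthetic clips below stand in for a real instrument dataset, and the specific thresholds are illustrative assumptions.

```python
# Sketch: spectral peaks -> ratio to fundamental -> k-means windows.
# Synthetic harmonic tones stand in for real instrument clips (assumption).
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
sr, n = 16000, 4096
freqs = np.fft.rfftfreq(n, 1.0 / sr)

ratios = []
for _ in range(20):
    f0 = rng.uniform(220, 440)
    t = np.arange(n) / sr
    # Tone with partials at 1x, 2x, 3x the fundamental.
    sig = sum(a * np.sin(2 * np.pi * f0 * k * t)
              for k, a in [(1, 1.0), (2, 0.5), (3, 0.25)])
    mag = np.abs(np.fft.rfft(sig * np.hanning(n)))   # Hann window vs. leakage
    peaks, _ = find_peaks(mag, height=mag.max() * 0.1)
    ratios.extend(freqs[peaks] / f0)                 # peak freq / fundamental

# Cluster the ratios; cluster centres define the spectral windows from
# which amplitude features would later be extracted.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(
    np.array(ratios).reshape(-1, 1))
windows = sorted(km.cluster_centers_.ravel())
print(windows)
```

For these synthetic tones the recovered windows sit near the harmonic ratios 1, 2, and 3; on real data the clusters would instead reflect each instrument's characteristic regions of spectral energy.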