Supervised Symbolic Music Style Translation Using Synthetic Data

arXiv.org Machine Learning

Research on style transfer and domain translation has clearly demonstrated the ability of deep learning-based algorithms to manipulate images in terms of artistic style. More recently, several attempts have been made to extend such approaches to music (both symbolic and audio) in order to enable transforming musical style in a similar manner. In this study, we focus on symbolic music with the goal of altering the 'style' of a piece while keeping its original 'content'. As opposed to the current methods, which are inherently restricted to being unsupervised due to the lack of 'aligned' data (i.e. the same musical piece played in multiple styles), we develop the first fully supervised algorithm for this task. At the core of our approach lies a synthetic data generation scheme which allows us to produce virtually unlimited amounts of aligned data, and hence avoid the above issue. Building on this data generation scheme, we propose an encoder-decoder model for translating symbolic music accompaniments between a number of different styles. Our experiments show that our models, although trained entirely on synthetic data, are capable of producing musically meaningful accompaniments even for real (non-synthetic) MIDI recordings.
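To make the supervised setup concrete, below is a minimal sketch of how aligned (source-style, target-style) token sequences could be used to train a style-conditioned encoder-decoder with ordinary teacher forcing. The GRU architecture, vocabulary size, and all identifiers here are illustrative assumptions, not the paper's actual model or tokenization.

```python
# Illustrative sketch only: the architecture and hyperparameters below are
# assumptions, not the model described in the abstract.
import torch
import torch.nn as nn

VOCAB_SIZE = 128   # e.g. one token per MIDI pitch (assumption)
EMBED_DIM = 64
HIDDEN_DIM = 128

class StyleTranslator(nn.Module):
    def __init__(self, num_styles):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.style_embed = nn.Embedding(num_styles, HIDDEN_DIM)
        self.encoder = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.decoder = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, src, tgt_in, target_style):
        # Encode the source-style accompaniment.
        _, h = self.encoder(self.embed(src))
        # Condition the decoder state on the requested target style.
        h = h + self.style_embed(target_style).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt_in), h)
        return self.out(dec_out)

# Because the synthetic data are aligned, training reduces to teacher-forced
# cross-entropy on the target-style sequence (random tensors stand in for data).
model = StyleTranslator(num_styles=4)
src = torch.randint(0, VOCAB_SIZE, (8, 32))   # source-style sequences
tgt = torch.randint(0, VOCAB_SIZE, (8, 32))   # aligned target-style sequences
style = torch.randint(0, 4, (8,))
logits = model(src, tgt[:, :-1], style)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), tgt[:, 1:].reshape(-1))
loss.backward()
```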


Liang

AAAI Conferences

Bagging is a simple yet effective design that combines multiple single learners to form an ensemble for prediction. Despite its popular usage in many real-world applications, existing research is mainly concerned with studying unstable learners as the key to ensuring the performance gain of a bagging predictor, with many key factors remaining unclear. For example, it is not clear when a bagging predictor can outperform a single learner, or what performance gain can be expected when different learning algorithms are used to form a bagging predictor. In this paper, we carry out comprehensive empirical studies to evaluate bagging predictors by using 12 different learning algorithms and 48 benchmark datasets. Our analysis uses robustness and stability decompositions to characterize different learning algorithms, through which we rank all learning algorithms and comparatively study their bagging predictors to draw conclusions. Our studies assert that both stability and robustness are key requirements to ensure high performance when building a bagging predictor. In addition, our studies demonstrate that bagging is statistically superior to most single base learners, except for KNN and Naïve Bayes (NB). Multi-layer perceptron (MLP), Naïve Bayes Trees (NBTree), and PART are the learning algorithms with the best bagging performance.
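For readers unfamiliar with the setup being evaluated, here is a minimal sketch of the kind of comparison the study describes: a single base learner versus its bagged ensemble, scored by cross-validation on a benchmark dataset. The dataset, base learner, ensemble size, and cross-validation scheme below are illustrative choices, not the paper's experimental protocol.

```python
# Illustrative sketch only: dataset, base learner, and ensemble size are
# assumptions, not the paper's actual 12-algorithm / 48-dataset study.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# A single (unstable) base learner and its bagged ensemble of 50 bootstrap
# replicates trained on resampled copies of the data.
single = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(
    DecisionTreeClassifier(random_state=0),
    n_estimators=50,
    random_state=0,
)

# Compare mean 10-fold cross-validated accuracy of the two predictors.
print("single tree :", cross_val_score(single, X, y, cv=10).mean())
print("bagged trees:", cross_val_score(bagged, X, y, cv=10).mean())
```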


Villanova, Michigan Coach Different Styles for NCAA Title

U.S. News

"You hang in there and you just do your absolute best every single day. And someday you're going to say, I gave it everything I had, and if I'm falling into my grave, that's OK too," Beilein said. But you just do everything you can to be the best coach, the best mentor, the best teacher, the best husband, the grandfather, father every day, and you go do it again. And that's all I want to be."


This smart mirror uses AR to let you 'try on' different hair styles

Engadget

We've seen a slew of smart mirrors get introduced over the past few years, including one from Panasonic that's designed to analyze your skin. But CareOS, a company based out of Europe, wants to make an entire connected platform for the home and beauty salons out of its Artemis smart mirror. The mirror uses augmented reality to do things like "try on" a variety of different hair colors on you, which would come in handy before you decide to get a makeover. It can also integrate with brands to let you buy facial creme, as well as show you video tutorials on how to apply the makeup you're buying. Aside from facial recognition, the Artemis smart mirror features voice commands, a touchless user interface to keep it from getting dirty, and 4D Visualization that allows it to take 3D captures of your face.