Goto

How "creative AI" can change the future of music for everyone

#artificialintelligence

Do you think you can tell a piece of music composed by artificial intelligence (AI) from one created by a human composer? Before you read any further, let's find out. The following audio consists of two fragments, one written by AI, the other by a human. If you didn't get it right the first time, no worries--we'll have a couple more mini-quizzes like this below. The AI that wrote the fragment above was developed by Jukedeck, a UK-based startup working on machine-made music, which won the startup competition at TechCrunch Disrupt London in 2015.


Generating Music Medleys via Playing Music Puzzle Games

AAAI Conferences

Generating music medleys is about finding an optimal permutation of a given set of music clips. Toward this goal, we propose a self-supervised learning task, called the music puzzle game, to train neural network models to learn the sequential patterns in music. In essence, such a game requires machines to correctly sort a few multi-second music fragments. In the training stage, we train the model by sampling multiple non-overlapping fragment pairs from the same songs and predicting whether a given pair is consecutive and in the correct chronological order. For testing, we design a number of puzzle games with different difficulty levels, the most difficult one being the music medley, which requires sorting fragments from different songs. On the basis of a state-of-the-art Siamese convolutional network, we propose an improved architecture that learns to embed frame-level similarity scores computed from the input fragment pairs into a common space, where fragment pairs in the correct order can be more easily identified. Our results show that the resulting model, dubbed the similarity embedding network (SEN), performs better than competing models across different games, including music jigsaw puzzle, music sequencing, and music medley. Example results can be found at our project website, https://remyhuang.github.io/DJnet.
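The self-supervised training task described in this abstract is straightforward to sketch: sample non-overlapping fragment pairs from a song and train a shared (Siamese) convolutional encoder with a binary head to predict whether a pair is consecutive and in chronological order. The snippet below is a minimal illustrative sketch of that pairwise setup in PyTorch; the fragment length, layer sizes, the sample_fragment_pair helper, and the simplified negative sampling (swapped order only) are assumptions for illustration, not the paper's SEN architecture.

```python
# Minimal sketch (assumption): a Siamese-style convolutional model trained on the
# self-supervised "music puzzle" pair-ordering task described above. The fragment
# length, layer sizes, and the sampling helper are illustrative, not the paper's SEN.
import random
import torch
import torch.nn as nn

class FragmentEncoder(nn.Module):
    """Shared (Siamese) convolutional encoder for one log-mel fragment."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling over frequency and time
        )
        self.fc = nn.Linear(64, emb_dim)

    def forward(self, x):              # x: (batch, 1, n_mels, n_frames)
        return self.fc(self.conv(x).flatten(1))

class PairOrderClassifier(nn.Module):
    """Predicts whether fragment A is immediately followed by fragment B."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.encoder = FragmentEncoder(emb_dim)
        self.head = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, frag_a, frag_b):
        za, zb = self.encoder(frag_a), self.encoder(frag_b)
        return self.head(torch.cat([za, zb], dim=1)).squeeze(1)   # one logit per pair

def sample_fragment_pair(spectrogram, n_frames=200):
    """Hypothetical helper: draw two adjacent, non-overlapping fragments from one song.
    Label 1 means the pair is consecutive and in chronological order; negatives are
    simplified here to the same pair in swapped order."""
    total = spectrogram.shape[-1]
    start = random.randrange(0, total - 2 * n_frames)
    a = spectrogram[..., start:start + n_frames]
    b = spectrogram[..., start + n_frames:start + 2 * n_frames]
    if random.random() < 0.5:
        return a, b, 1.0
    return b, a, 0.0

# One training step on a stand-in spectrogram (1 channel, 128 mel bins, 1000 frames).
model = PairOrderClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

song = torch.randn(1, 128, 1000)
frag_a, frag_b, label = sample_fragment_pair(song)
logit = model(frag_a.unsqueeze(0), frag_b.unsqueeze(0))   # add batch dimension
loss = loss_fn(logit, torch.tensor([label]))
loss.backward()
optimizer.step()
```

At test time, pairwise scores of this kind could be aggregated to order a whole set of fragments, which is the role the puzzle games (jigsaw, sequencing, medley) play in evaluating the learned model.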

