

Generate a jazz rock track using AWS DeepComposer with machine learning

#artificialintelligence

At AWS, we love sharing our passion for technology and innovation, and AWS DeepComposer is no exception. This service is designed to help everyone learn about generative artificial intelligence (AI) through the language of music. You can use a sample melody, upload your own melody, or play a tune using the virtual or a real keyboard. Best of all, you don't have to write any code. But what exactly is generative AI, and why is it useful?


How to run an AI powered musical challenge: "AWS DeepComposer Got Talent"

#artificialintelligence

To help you fast-track your company's adoption of machine learning (ML), AWS offers educational solutions that give developers hands-on experience. We like to think of these programs as a fun way for developers to build their skills using ML technologies in real-world scenarios. In this post, we walk you through how to prepare for and run an AI music competition using AWS DeepComposer. Through AWS DeepComposer, you can experience generative AI in action and learn how to harness the latest in ML and AI. We provide an end-to-end kit that contains the tools, techniques, processes, and best practices to run the event. Designed specifically to educate developers on generative AI, AWS DeepComposer includes tutorials, sample code, and training data in an immersive platform that can be used to build ML models with music as the medium of instruction. Developers, regardless of their background in ML or music, can get started with applying AI techniques, including Generative Adversarial Networks (GANs), autoregressive convolutional neural networks (AR-CNNs), and Transformers, to generate new musical notes and accompaniments.


Collaborating with AI to create Bach-like compositions in AWS DeepComposer

#artificialintelligence

AWS DeepComposer provides a creative and hands-on experience for learning generative AI and machine learning (ML). We recently launched the Edit melody feature, which allows you to add, remove, or edit specific notes, giving you full control of the pitch, length, and timing of each note. In this post, you can learn to use the Edit melody feature to collaborate with the autoregressive convolutional neural network (AR-CNN) algorithm and create interesting Bach-style compositions. Through human-AI collaboration, we can surpass what humans and AI systems can create independently. For example, you can seek inspiration from AI to create art or music outside your area of expertise, or offload the more routine tasks, like creating variations on a melody, and focus on the more interesting and creative work.
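To make the per-note controls concrete, here is a minimal sketch of a melody representation with explicit pitch, timing, and length fields, and the three kinds of edit the feature describes. This is an illustrative data model, not DeepComposer's actual internal format.

```python
# Hypothetical melody representation: each note carries pitch, start
# time, and length, so each can be edited independently.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Note:
    pitch: int      # MIDI pitch number, 60 = middle C
    start: float    # onset, in beats from the start of the bar
    length: float   # duration, in beats

melody = [Note(60, 0.0, 1.0), Note(62, 1.0, 1.0)]

# "Edit a note": raise the second note to E and halve its length.
melody[1] = replace(melody[1], pitch=64, length=0.5)

# "Add" and "remove" are plain list operations in this representation.
melody.append(Note(67, 2.0, 1.0))   # add a G on beat 2
del melody[0]                       # remove the original middle C
```

With notes stored this way, an algorithm like AR-CNN and a human editor can take turns modifying the same list.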


Researchers Create AI Model Capable Of Singing In Both Chinese and English

#artificialintelligence

A team of researchers from Microsoft and Zhejiang University have recently created an AI model capable of singing in numerous languages. As VentureBeat reported, the DeepSinger AI developed by the team was trained on data from various music websites, using algorithms that captured the timbre of the singer's voice. Generating the "voice" of an AI singer requires algorithms that can predict and control both the pitch and duration of audio. When people sing, the sounds they produce have vastly more complex rhythms and patterns than ordinary speech. Another problem for the team to overcome was that while there is a fair amount of speech training data available, singing training datasets are fairly rare.


Generating compositions in the style of Bach using the AR-CNN algorithm in AWS DeepComposer

#artificialintelligence

AWS DeepComposer gives you a creative way to get started with machine learning (ML) and generative AI techniques. AWS DeepComposer recently launched a new generative AI algorithm called autoregressive convolutional neural network (AR-CNN), which allows you to generate music in the style of Bach. In this blog post, we show a few examples of how you can use the AR-CNN algorithm to generate interesting compositions in the style of Bach and explain how the algorithm's parameters impact the characteristics of the generated composition. The AR-CNN algorithm provided in the AWS DeepComposer console offers several parameters for generating unique compositions, such as the number of iterations and the maximum number of notes to add to or remove from the input melody. These parameter values directly control how far the generated composition departs from the input melody.
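The role of those parameters can be illustrated with a toy loop: each iteration either adds or removes one note, and the add/remove caps bound the total drift from the input. The random chooser below stands in for the real AR-CNN network, and all names and defaults are assumptions for illustration, not the console's actual API.

```python
# Toy sketch of AR-CNN-style iterative editing: the iteration count and
# the max-add / max-remove caps bound how much the output can differ
# from the input melody. A random chooser stands in for the model.
import random

def edit_melody(melody, iterations=10, max_add=5, max_remove=5, seed=0):
    """melody: iterable of (pitch, time_step) pairs."""
    rng = random.Random(seed)
    notes = set(melody)
    added = removed = 0
    for _ in range(iterations):
        if rng.random() < 0.5 and added < max_add:
            # "Model" proposes a new note: random pitch, random step.
            notes.add((rng.randrange(48, 72), rng.randrange(32)))
            added += 1
        elif removed < max_remove and notes:
            # "Model" proposes deleting an existing note.
            notes.remove(rng.choice(sorted(notes)))
            removed += 1
    return sorted(notes), added, removed

out, added, removed = edit_melody([(60, 0), (64, 4)], iterations=8)
```

Raising `iterations`, `max_add`, or `max_remove` lets the result stray further from the input; lowering them keeps the original melody more recognizable, which matches the trade-off the post describes.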


Creating a music genre model with your own data in AWS DeepComposer

#artificialintelligence

AWS DeepComposer is an educational AWS service that teaches generative AI and uses Generative Adversarial Networks (GANs) to transform a melody that you provide into a completely original song. With AWS DeepComposer, you can use one of the pre-trained music genre models (such as Jazz, Rock, Pop, Symphony, or Jonathan-Coulton) or train your own. As part of training a custom music genre model, you store your music data as NumPy arrays. This post accompanies the training steps in Lab 2 – Train a custom GAN model on GitHub and demonstrates how to convert your MIDI files to the proper training format for AWS DeepComposer. For this use case, you use your own MIDI files to train a Reggae music genre model.
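The core of that conversion is turning note events from a MIDI file into a fixed-size piano-roll array. The sketch below shows the idea with plain NumPy; the 128-pitch by 32-step shape and the note-tuple input format are assumptions for illustration, not the exact format the lab uses (in practice a MIDI parser such as pretty_midi would supply the note events).

```python
# Hypothetical sketch: converting note events into a piano-roll NumPy
# array, the kind of fixed-size tensor a GAN training pipeline expects.
import numpy as np

def notes_to_piano_roll(notes, num_steps=32):
    """notes: list of (pitch, start_step, end_step) tuples.

    Returns a (128, num_steps) array with 1.0 where a note sounds.
    """
    roll = np.zeros((128, num_steps), dtype=np.float32)
    for pitch, start, end in notes:
        roll[pitch, start:min(end, num_steps)] = 1.0
    return roll

# A two-note melody: middle C for 4 steps, then E for 4 steps.
melody = [(60, 0, 4), (64, 4, 8)]
roll = notes_to_piano_roll(melody)
```

Stacking one such array per bar (or per track) gives the NumPy training objects the post refers to, which can then be saved with `np.save`.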


Amazon's new "AI keyboard" is confusing everyone

#artificialintelligence

Amazon Web Services debuted a keyboard called DeepComposer this week, claiming it's "the world's first musical keyboard powered by generative AI." It has 32 keys, costs $99, and connects to a software interface that uses machine learning and cloud computing to generate music based on what you play. It's been unclear who this is for, and many have latched on to the fact that the music it creates just sounds bad. It looks like a consumer product, and Amazon used an over-the-top presentation to hype it, which included what AWS claimed was "the first hybrid AI human pop acoustic collaboration." But actually, the keyboard is intended to be a beginning tool for developers to get into machine learning and music.


Top 10 Machine Learning Announcements From AWS re:Invent

#artificialintelligence

The AWS re:Invent conference announced numerous tools and services for developers in 2019. This year, AWS paid special attention to machine learning development. In this article, we list the top announcements on machine learning services at AWS re:Invent 2019. Amazon Augmented AI (A2I) provides built-in human review workflows for common machine learning use cases, such as content moderation and text extraction from documents, which allows predictions from Amazon Rekognition and Amazon Textract to be reviewed easily. This feature makes it easy to build and manage human reviews for machine learning applications.


Why People Are So Overwhelmed by AWS' Latest Musical Keyboard Powered by Generative AI

#artificialintelligence

As much as a programmer may like machine learning, there comes a time when they are overwhelmed by the study process. All the coding, maths, and infrastructure might make one reach for that extra cup of coffee. Now, e-commerce giant Amazon has made the world of generative artificial intelligence a little easier to understand by introducing its machine learning-powered MIDI-compatible keyboard, DeepComposer. AWS DeepComposer is a 32-key, two-octave keyboard. This ML keyboard lets developers experience generative AI in a hands-on way.