A new Google Magenta project (created by an intern!) lets you mix lo-fi hip-hop tracks to build a custom music room in your browser, no musical ability required. Magenta applies Google's machine learning systems to the creation of art and music, and the Lo-Fi Player is a fun example of what it can do. When you open Lo-Fi Player, you're taken to a pixelated virtual "room" where you click different objects -- a clock, a cat, or a piano, for instance -- to change the different tracks, like the bass line and the melody. "The view outside the window relates to the background sound in the track, and you can change both the visual and the music by clicking on the window," Lo-Fi Player creator Vibert Thio wrote in a blog post. Thio writes that the team chose the format of a music-generating room, rather than a composition tool or musical instrument, because lo-fi hip-hop is "a popular genre with a relatively simple music structure."
It would be illogical today to think that AI could completely replace human creativity. With two such powerful "machines" at our disposal, discarding either one would be a mistake. Instead, we should draw on the full potential of both -- a combination impossible to replace. Consider art, music, dance, writing, … "Being creative means being in love with life": being able to generate new ideas or concepts spontaneously. Does AI have a place in these fields?
The Google Magenta team, which makes machine-learning tools for the creative process, has built models that help you compose melodies and tools that help you sketch cats -- mostly because it's fun, but also to explore how AI can make creation more accessible. Its latest project now gives anyone a chance to make quarantine tunes to vibe to -- no music training necessary. Lo-Fi Player, designed by Vibert Thio, a technologist and artist who interned with the team this summer, lets users interact with objects in a virtual room to mix their own lo-fi hip-hop soundtracks. The goal is to make the music-mixing experience as simple and friendly as possible.
The format has come a long way since 1990s Geocities pages and games like Doom, thanks to better computing power, digital samplers, and recent movements like "Black MIDI," in which MIDI musicians like Lee saturate a digital musical score with so many notes, typically in the thousands or millions, that little white peers through. The state of the art in training computers is deep learning, a form of machine learning that uses neural networks -- a method of storing information that loosely approximates the information processing of the brain and nervous system. In computer vision, where deep learning has become the standard machine-learning technique, scientists can see what a computer learns through a neural network: it learns what shapes to look for in an image.
In May, Google research scientist Douglas Eck left his Silicon Valley office to spend a few days at Moogfest, a gathering for music, art, and technology enthusiasts deep in North Carolina's Smoky Mountains. Eck told the festival's music-savvy attendees about his team's new ideas about how to teach computers to help musicians write music--generate harmonies, create transitions in a song, and elaborate on a recurring theme. Someday, the machine could learn to write a song all on its own. Eck hadn't come to the festival--which was inspired by the legendary creator of the Moog synthesizer and peopled with musicians and electronic music nerds--simply to introduce his team's challenging project. To "learn" how to create art and music, he and his colleagues need users to feed the machines tons of data, using MIDI, a format more often associated with dinky video game sounds than with complex machine learning.
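The MIDI files Eck's team trains on are symbolic note events rather than recorded audio, which is what makes them tractable training data. A minimal sketch of what such sequence data might look like, using a hypothetical tuple layout (not Magenta's actual schema) and a toy augmentation step:

```python
# Hypothetical symbolic representation of a melody, the kind of note-event
# data a machine-learning model might train on. Each event is
# (pitch, start_beat, duration_beats); pitches are MIDI note numbers.
melody = [
    (60, 0.0, 1.0),  # C4
    (64, 1.0, 1.0),  # E4
    (67, 2.0, 2.0),  # G4
]

def transpose(events, semitones):
    """Shift every pitch by a number of semitones, a common way to
    augment symbolic music data before training."""
    return [(pitch + semitones, start, dur) for pitch, start, dur in events]

print(transpose(melody, 2))  # the same melody moved up a whole step
```

Because events are just numbers, operations like transposition or time-stretching become simple arithmetic, which is one reason MIDI-style data is so convenient for machine learning despite its "dinky video game" reputation.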