Transfer Learning for Underrepresented Music Generation
Doosti, Anahita, Guzdial, Matthew
Combinational creativity, also sometimes combinatorial creativity, is a type of creative problem solving in which two conceptual spaces are combined to represent a third or new conceptual space (Boden 2009). While different musical genres may vary in terms of their local features (e.g., melodies), they are all still music. As such, we [...] network models for music generation have arisen, trained on massive datasets and requiring significant computation (Civit et al. 2022). While these approaches have proven successful at replicating genres of music like those in their training sets, due to the nature of large-scale neural network models we expect this may not prove true for dissimilar genres.
- North America > Canada > Alberta > Census Division No. 11 > Edmonton Metropolitan Region > Edmonton (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > India (0.04)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
Flat Latent Manifolds for Human-machine Co-creation of Music
Chen, Nutan, Benbouzid, Djalel, Ferroni, Francesco, Nitschke, Mathis, Pinna, Luciano, van der Smagt, Patrick
The use of machine learning in artistic music generation leads to controversial discussions of the quality of art, for which objective quantification is nonsensical. We therefore consider a music-generating algorithm as a counterpart to a human musician, in a setting where reciprocal interplay is to lead to new experiences, both for the musician and the audience. To obtain this behaviour, we resort to the framework of recurrent Variational Auto-Encoders (VAE) and learn to generate music, seeded by a human musician. In the learned model, we generate novel musical sequences by interpolation in latent space. Standard VAEs, however, do not guarantee any form of smoothness in their latent representation. This translates into abrupt changes in the generated music sequences. To overcome these limitations, we regularise the decoder and endow the latent space with a flat Riemannian manifold, i.e., a manifold that is isometric to the Euclidean space. As a result, linearly interpolating in the latent space yields realistic and smooth musical changes that fit the type of machine-musician interactions we aim for. We provide empirical evidence for our method via a set of experiments on music datasets and we deploy our model for an interactive jam session with a professional drummer. The live performance provides qualitative evidence that the latent representation can be intuitively interpreted and exploited by the drummer to drive the interplay. Beyond the musical application, our approach showcases an instance of human-centred design of machine-learning models, driven by interpretability and the interaction with the end user.
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
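The interpolation the abstract describes can be sketched in a few lines. On a flat (Euclidean-isometric) latent manifold, equal steps along a straight line between two latent codes should decode to equally gradual musical changes. The 512-dimensional latent size and the random codes standing in for encoded musician input are assumptions, and the decoder itself is omitted:

```python
import numpy as np

def lerp(z_a, z_b, num_steps):
    """Linearly interpolate between two latent codes z_a and z_b.

    On a flat latent manifold, each step along this line should
    decode to a similarly sized change in the generated music.
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

# Two hypothetical 512-dimensional latent codes (e.g., encoded drum bars).
rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)

path = lerp(z_a, z_b, num_steps=11)
# path[0] is z_a, path[-1] is z_b; each row would be fed to the decoder.
```

The point of the paper's regularisation is precisely that this naive straight-line path, which is not smooth under a standard VAE, becomes musically smooth once the decoder is constrained to a flat manifold.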
Miles Brundage on AI Misuse and Trustworthy AI
In episode 17 of The Gradient Podcast, we talk to Miles Brundage, Head of Policy Research at OpenAI. Miles is a researcher and research manager passionate about the responsible governance of artificial intelligence. In 2018, he joined OpenAI, where he began as a Research Scientist and recently became Head of Policy Research. Before that, he was a Research Fellow at the University of Oxford's Future of Humanity Institute, where he is still a Research Affiliate. He also serves as a member of Axon's AI and Policing Technology Ethics Board. He completed a PhD in Human and Social Dimensions of Science and Technology from Arizona State University in 2019.
- North America > United States > Arizona (0.27)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.27)
What can AI do for the Music Industry?
Music artists, composers and producers today swim in massive amounts of musical notes, testing the limits of what melodies, harmonies and symphonies they can create and what works best with their songs. Although advances in technology have significantly simplified and streamlined the process, it is still a long and challenging one for everyone involved in music creation. However, a technological revolution may be about to change music creation as we know it. A team of computer scientists used AI to complete the unfinished 10th Symphony of Ludwig van Beethoven. This project has provoked interesting discussions, such as whether the now-completed symphony is what Beethoven was originally trying to create, and it also raises an important question: what can Artificial Intelligence (AI) and Machine Learning (ML) do for music production in the music entertainment industry? The team at Brainpool has been pondering the answer to the latter, so we took the time to test a few of the readily available AI music demos and reflected on how they could help transform the music industry.
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
New AI By Google Allows You To Create Music On Browser
Recently, developers from Google's Magenta introduced a virtual room in the browser known as Lo-Fi Player that lets you play with the musical beats of various instruments. Lo-Fi Player is essentially a music-generating tool that allows you to select and create music of your choice. In a blog post, the developers stated that anyone who has ever listened to the popular Lo-Fi Hip Hop streams while working and imagined being the producer can now create their own music and vibe. The developers chose Lo-Fi Hip Hop because it is a popular genre whose musical structure is relatively simple. According to them, this limited flexibility helped ensure that the generated music always makes some sense.
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
Demos
A primary goal of the Magenta project is to demonstrate that machine learning can be used to enable and enhance the creative potential of all people. The demos and apps listed on this page illustrate the work of many people, both inside and outside of Google, to build fun toys, creative applications, research notebooks, and professional-grade tools that will benefit a wide range of users. This section includes hosted browser-based applications, many of which are implemented with TensorFlow.js for WebGL-accelerated inference. One is an interactive demo by Google Creative Lab based on MusicVAE, built with the MusicVAE.js library. You can use it to generate two-dimensional palettes of drum beats and draw paths through the latent space to create evolving beats.
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Asia > Vietnam > Hanoi > Hanoi (0.05)
- Media > Music (0.30)
- Leisure & Entertainment (0.30)
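A two-dimensional palette like the one in the demo can be sketched as bilinear interpolation between four corner latent codes, one per corner of the grid. The 64-dimensional latent size and the random corner codes are placeholders, and the decoder that would turn each cell into a drum beat is omitted:

```python
import numpy as np

def latent_palette(corners, rows, cols):
    """Bilinearly interpolate a rows x cols grid of latent codes from
    four corner codes (top-left, top-right, bottom-left, bottom-right).

    Decoding each grid cell would yield a 2-D palette of drum beats;
    tracing a path through the grid gives an evolving beat.
    """
    tl, tr, bl, br = corners
    grid = np.empty((rows, cols, tl.shape[0]))
    for i in range(rows):
        v = i / (rows - 1)                 # vertical blend factor
        left = (1 - v) * tl + v * bl       # interpolate down the left edge
        right = (1 - v) * tr + v * br      # interpolate down the right edge
        for j in range(cols):
            u = j / (cols - 1)             # horizontal blend factor
            grid[i, j] = (1 - u) * left + u * right
    return grid

# Four hypothetical corner codes, e.g., encodings of four seed drum beats.
rng = np.random.default_rng(2)
corners = [rng.standard_normal(64) for _ in range(4)]
grid = latent_palette(corners, rows=4, cols=4)
```

Each corner of the returned grid reproduces its seed code exactly, and interior cells blend their neighbours, which is what gives the palette its smooth feel.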
MusicVAE: Creating a palette for musical scores with machine learning.
When a painter creates a work of art, she first blends and explores color options on an artist's palette before applying them to the canvas. This process is a creative act in its own right and has a profound effect on the final work. Musicians and composers have mostly lacked a similar device for exploring and mixing musical ideas, but we are hoping to change that. Below we introduce MusicVAE, a machine learning model that lets us create palettes for blending and exploring musical scores. As an example, listen to this gradual blending of two different melodies, A and B. We'll explain how this morph was achieved throughout the post.
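One way such an A-to-B morph can be realised is by interpolating between the two melodies' latent codes and decoding each intermediate point. The sketch below uses spherical interpolation, a common choice for Gaussian latent spaces; the 256-dimensional random codes stand in for actual encoded melodies, and this is an illustrative sketch rather than MusicVAE's exact procedure:

```python
import numpy as np

def slerp(z_a, z_b, alpha):
    """Spherical interpolation between two latent codes.

    Follows the arc between the codes rather than the chord, which
    keeps intermediate points at magnitudes the decoder has seen.
    """
    omega = np.arccos(np.clip(
        np.dot(z_a / np.linalg.norm(z_a), z_b / np.linalg.norm(z_b)),
        -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:                       # nearly parallel: fall back to lerp
        return (1 - alpha) * z_a + alpha * z_b
    return (np.sin((1 - alpha) * omega) / so) * z_a \
         + (np.sin(alpha * omega) / so) * z_b

# Hypothetical latent codes for melodies A and B.
rng = np.random.default_rng(1)
z_A, z_B = rng.standard_normal(256), rng.standard_normal(256)

# Nine steps from A to B; decoding each code yields the gradual blend.
morph = [slerp(z_A, z_B, a) for a in np.linspace(0.0, 1.0, 9)]
```

The endpoints of the path recover the original codes exactly, so the morph starts as pure melody A and ends as pure melody B.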