If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Cleanly isolating vocals from drums, bass, piano, and other musical accompaniment is the dream of every mashup artist, karaoke fan, and producer. Commercial solutions exist, but they can be expensive and unreliable. Techniques like phase cancellation have very mixed results. The engineering team behind the streaming music service Deezer just open-sourced Spleeter, their audio separation library built on Python and TensorFlow that uses machine learning to quickly and freely separate music into stems.
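The idea behind learned separators like Spleeter is to estimate a time-frequency mask that keeps one source and suppresses the rest. The sketch below is not Spleeter's actual model; it is a toy, hand-built mask on synthetic signals, just to show what masking accomplishes:

```python
import numpy as np

# Toy mask-based source separation on synthetic signals. A system like
# Spleeter learns the mask from data; here we construct it by hand.
rate = 8000
t = np.arange(rate) / rate
vocal = np.sin(2 * np.pi * 440 * t)        # stand-in "vocal": 440 Hz tone
bass = 0.5 * np.sin(2 * np.pi * 110 * t)   # stand-in "bass": 110 Hz tone
mix = vocal + bass

# Move to the frequency domain, where the two sources barely overlap.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(rate, d=1 / rate)

# A learned model would predict this mask; we hard-code a cutoff.
vocal_mask = (freqs > 300).astype(float)
vocal_est = np.fft.irfft(spectrum * vocal_mask, n=rate)

# The masked estimate tracks the true "vocal" far better than the raw mix.
err_est = np.mean((vocal_est - vocal) ** 2)
err_mix = np.mean((mix - vocal) ** 2)
print(err_est < err_mix)  # True
```

Real music is nothing like two pure tones, of course; the sources overlap heavily in frequency, which is why Spleeter trains a deep network to predict the masks rather than using a fixed cutoff.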
Robots are usually associated with automation, science, and engineering. But can robots have other talents? If you play a melody to a robot, would it be able to comprehend it and come up with a musical response? Can it learn music, compose its own music, compete with humans, and even surpass them? Could robots be the next Jonas Brothers, Imagine Dragons, or Mozart?
Part of this exhibit will feature illusions. Some can fool people, and some can fool machines. Without full context, our brains cannot always make sense of images. Machines can see illusions we can't, but they can't see things we can! For example: without ears and a tail, a Chihuahua can easily look like a blueberry muffin!
It is early July, almost 30°C outside, but Mihkel Jäätma is thinking about Christmas. In a co-working space in Soho, the 39-year-old founder and CEO of Realeyes, an "emotion AI" startup which uses eye-tracking and facial expression to analyse mood, scrolls through a list of 20 festive ads from 2018. He settles on The Boy and the Piano, the offering from John Lewis that tells the life story of Elton John backwards, from megastardom to the gift of a piano from his parents as a child, accompanied by his timeless heartstring-puller Your Song. The ad was well received, but Jäätma is clearly unconvinced. He hits play, and the ad starts, but this time two lines – one grey (negative reactions), the other red (positive) – are traced across the action.
Research on style transfer and domain translation has clearly demonstrated the ability of deep learning-based algorithms to manipulate images in terms of artistic style. More recently, several attempts have been made to extend such approaches to music (both symbolic and audio) in order to enable transforming musical style in a similar manner. In this study, we focus on symbolic music with the goal of altering the 'style' of a piece while keeping its original 'content'. As opposed to the current methods, which are inherently restricted to be unsupervised due to the lack of 'aligned' data (i.e. the same musical piece played in multiple styles), we develop the first fully supervised algorithm for this task. At the core of our approach lies a synthetic data generation scheme which allows us to produce virtually unlimited amounts of aligned data, and hence avoid the above issue. In view of this data generation scheme, we propose an encoder-decoder model for translating symbolic music accompaniments between a number of different styles. Our experiments show that our models, although trained entirely on synthetic data, are capable of producing musically meaningful accompaniments even for real (non-synthetic) MIDI recordings.
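The key move in the abstract above is generating "aligned" data synthetically: the same musical content rendered in more than one style, which makes fully supervised training possible. The sketch below illustrates only that data-generation idea with two invented toy "styles" (block chords vs. arpeggios); the paper's actual renderers and encoder-decoder model are not shown:

```python
# Toy illustration of synthetic aligned-pair generation: the same
# harmonic 'content' rendered in two invented 'styles'.
CHORDS = [(60, 64, 67), (65, 69, 72)]  # C major, F major triads (MIDI pitches)

def render_block_style(chords):
    """'Style A': play each chord as one block of simultaneous notes."""
    return [list(ch) for ch in chords]

def render_arpeggio_style(chords):
    """'Style B': spell each chord out one note at a time."""
    return [[p] for ch in chords for p in ch]

def make_aligned_pair(chords):
    # Same content, two styles -> one supervised (input, target) pair.
    return render_block_style(chords), render_arpeggio_style(chords)

src, tgt = make_aligned_pair(CHORDS)
print(src)  # [[60, 64, 67], [65, 69, 72]]
print(tgt)  # [[60], [64], [67], [65], [69], [72]]
```

Because both renderings come from the same chord sequence, every pair is aligned by construction, which is exactly what unsupervised approaches lack.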
We've created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text. Since MuseNet knows many different styles, we can blend generations in novel ways. Here the model is given the first 6 notes of a Chopin Nocturne, but is asked to generate a piece in a pop style with piano, drums, bass, and guitar.
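MuseNet's training objective, shared with GPT-2, is simply predicting the next token in a sequence. The following is a deliberately minimal stand-in for that objective (a bigram counter over invented note tokens, nothing like the actual transformer) just to make the objective concrete:

```python
from collections import Counter, defaultdict

# Minimal next-token prediction: count which token follows which,
# then predict the most frequent successor. MuseNet does the same
# job with a large transformer over real MIDI-derived tokens.
def train_bigram(tokens):
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    return counts[token].most_common(1)[0][0]

# A tiny 'corpus' of note tokens (a repeated C-E-G arpeggio).
corpus = ["C4", "E4", "G4", "C4", "E4", "G4", "C4"]
model = train_bigram(corpus)
print(predict_next(model, "E4"))  # G4
```

Scaling this objective from bigram counts to a transformer trained on hundreds of thousands of MIDI files is what lets the model absorb harmony, rhythm, and style without being explicitly programmed with music theory.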
The creation and performance of music has inspired AI researchers since the very early times of artificial intelligence [8, 13, 10], and there is today a rich literature of computational approaches to music, including AI systems for music composition and improvisation. As pointed out by Thom, however, these systems rarely focus on the spontaneous interaction between the human and the artificial musicians. We claim that such interaction demands a combination of reactivity and anticipation, that is, the ability to act now based on a predictive model of the companion player. This paper reports our initial steps in the generation of collaborative human-machine music performance, as a special case of the more general problem of anticipation and creative processes in mixed human-robot, or anthrobotic, systems. We consider a simple case study of a duo consisting of a human pianist accompanied by an off-the-shelf virtual drummer, and we design an AI system to control the key performance parameters of the virtual drummer (patterns, intensity, complexity, fills, and so on) as a function of what the human pianist is playing. The AI system is knowledge-based: it relies on an internal model represented by means of fuzzy logic.
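A knowledge-based fuzzy controller of the kind the abstract describes maps features of the pianist's playing to drummer parameters through graded rules. The membership functions, rules, and numbers below are invented for illustration, not taken from the paper:

```python
# Hypothetical fuzzy rule base: map the pianist's note density to a
# drummer intensity in [0, 1]. All shapes and constants are invented.
def sparse(notes_per_sec):
    """Membership in 'playing sparsely' (1 at 0 notes/s, 0 at 4+)."""
    return max(0.0, min(1.0, (4 - notes_per_sec) / 4))

def busy(notes_per_sec):
    """Membership in 'playing busily' (0 at 2 notes/s, 1 at 6+)."""
    return max(0.0, min(1.0, (notes_per_sec - 2) / 4))

def drummer_intensity(notes_per_sec):
    # Rules: IF sparse THEN soft (0.2); IF busy THEN loud (0.9).
    # Defuzzify with a weighted average of the rule outputs.
    w_sparse, w_busy = sparse(notes_per_sec), busy(notes_per_sec)
    total = w_sparse + w_busy
    return (0.2 * w_sparse + 0.9 * w_busy) / total if total else 0.5

print(round(drummer_intensity(1.0), 2))  # 0.2 -> sparse playing, soft drums
print(round(drummer_intensity(8.0), 2))  # 0.9 -> busy playing, loud drums
```

The appeal of the fuzzy formulation is that between the extremes the output blends smoothly, so the drummer's response changes gradually rather than snapping between discrete modes.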
WASHINGTON - U.S. computer scientist Katie Bouman, who became a global sensation over her role in generating the world's first image of a black hole, has described the painstaking process as akin to listening to a piano with missing keys. Testifying before Congress on Thursday, the postdoctoral fellow at the Harvard Smithsonian Center for Astrophysics also suggested that the technology developed by the project could have practical applications in the fields of medical imaging, seismic prediction and self-driving cars. A photo released last month of the star-devouring monster in the heart of the Messier 87 (M87) galaxy revealed a dark core encircled by a flame-orange halo of white hot plasma. Because M87 is 55 million light-years away, "This ring appears incredibly small on the sky: roughly 40 microarcseconds in size, comparable to the size of an orange on the surface of the moon as viewed from our location on Earth," said Bouman. The laws of physics require a telescope the size of our entire planet to view it.
I started playing piano when I was five years old. I used to practice for about an hour every day and let me tell you, an hour felt like forever. I didn't stop, though; I kept on practicing because I really liked music. Fast forward a few years and I started doing some really advanced stuff. My hands were literally flying all over the keyboard and I could play with my eyes closed.
LAS VEGAS – The Google team is seemingly everywhere at CES 2019, with signage at the main convention center ("Hey Google"), a large booth presence, and hundreds of people dressed in white "Hey Google" jumpsuits, topped off with matching "Hey Google" beanies. Arch-rival Amazon, on the other hand, has a small, understated ballroom at the lower-trafficked Sands Convention Expo, showcasing a potpourri of products, from Amazon and other vendors, that use Alexa voice commands. Staffers are adorned in blue Alexa sports shirts. On the eve of the show, the companies threw down the gauntlet: Amazon Echo speakers and third-party devices using the system have sold over 100 million units, Amazon said.