Goto

TempoExpress: An Expressivity-Preserving Musical Tempo Transformation System

AAAI Conferences

The research described in this paper focuses on global tempo transformations of monophonic audio recordings of saxophone jazz performances. More concretely, we have investigated the problem of how a performance played at a particular tempo can be automatically rendered at another tempo while preserving its expressivity. To do so, we have developed a case-based reasoning system called TempoExpress. We have extensively compared our results against a standard technique called uniform time stretching (UTS), and they show that our approach is superior to UTS.
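The UTS baseline referred to above is straightforward: the whole recording is sped up or slowed down by a single factor, so every expressive timing deviation is scaled uniformly rather than adapted to the new tempo. Below is a minimal sketch of that baseline (not the paper's own code), using librosa's phase-vocoder time stretcher; the file name and tempo values are hypothetical.

```python
# Minimal sketch of the uniform time stretching (UTS) baseline:
# one global stretch factor applied to the entire recording.
import librosa
import soundfile as sf

# Hypothetical monophonic saxophone recording and tempi.
y, sr = librosa.load("performance.wav", sr=None, mono=True)
source_tempo = 120.0            # original tempo (BPM), assumed known
target_tempo = 150.0            # desired tempo (BPM)

rate = target_tempo / source_tempo   # > 1 speeds up, < 1 slows down
y_uts = librosa.effects.time_stretch(y, rate=rate)

sf.write("performance_uts.wav", y_uts, sr)
```

Because the factor is global, a ritardando or a swung eighth-note pattern keeps its exact proportional shape at the new tempo, which need not match how performers actually adapt their expressivity; this is the behavior TempoExpress aims to improve on.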


The next frontier for artificial intelligence? Learning humans' common sense ZDNet

AITopics Original Links

Nearly half a century passed between the release of the films 2001: A Space Odyssey (1968) and Transcendence (2014), in which a quirky scientist's consciousness is uploaded into a computer. Despite being almost 50 years apart, their plots are broadly similar: science fiction stories continue to imagine the arrival of human-like machines that rebel against their creators and gain the upper hand in battle. In the field of artificial intelligence (AI) research, progress over the last 30 years has likewise been slower than expected. While AI is increasingly part of our everyday lives - in our phones or cars - and computers process large amounts of data, they still lack the human-level capacity to make deductions from the information they're given.


Preface

AAAI Conferences

THIS VOLUME contains the papers presented at the 20th International FLAIRS Conference (FLAIRS-20), held 7-9 May 2007 in Key West, Florida, USA. The call for papers attracted 182 submissions, 65 to the general conference and 117 to the 11 special tracks. Each paper was reviewed by at least three reviewers, coordinated by the program committees of the general conference and the special tracks. The program committees accepted the 125 papers that appear in these proceedings, 95 as presented papers (23 from the general conference and 72 from the special tracks) and 30 as poster papers (8 from the general conference and 22 from the special tracks). The best paper awards went to Joachim Baumeister, Thomas Kleemann, and Dietmar Seipel for "Towards the Verification of Ontologies with Rules," and to Susan Fox for "Introductory AI for both Computer Science and Neuroscience Students."


2011 Robert S. Engelmore Memorial Lecture Award

AI Magazine

Following a brief overview discussing why people prefer listening to expressive music instead of nonexpressive synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the article we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on TempoExpress, a case-based reasoning system developed at our Institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work that complements audio information with information about the gestures of the musician. Music is played through our bodies; capturing the gestures of the performer is therefore a fundamental aspect that has to be taken into account in future expressive music renderings. This article is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011.
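To make the CBR framing concrete, here is a minimal, hypothetical sketch of the retrieve-and-reuse cycle behind a system like TempoExpress: given an input phrase and a target tempo, retrieve the stored performance case whose melody and tempo best match, then reuse its expressive timing deviations. All names and the similarity measure are illustrative assumptions; the actual system uses richer melodic similarity (edit distances over note sequences) and constructive adaptation.

```python
# Hypothetical sketch of a CBR retrieve-and-reuse cycle for
# expressive tempo transformation (illustrative, not the real system).
from dataclasses import dataclass

@dataclass
class Case:
    pitches: list[int]              # MIDI pitches of the phrase
    tempo: float                    # tempo (BPM) at which it was performed
    onset_deviations: list[float]   # expressive onset shifts per note (seconds)

def similarity(a: list[int], b: list[int]) -> float:
    """Crude melodic similarity: fraction of position-wise pitch matches."""
    if not a or not b:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def retrieve(case_base: list[Case], pitches: list[int], target_tempo: float) -> Case:
    """Pick the case that best matches both the melody and the target tempo."""
    def score(c: Case) -> float:
        tempo_term = 1.0 / (1.0 + abs(c.tempo - target_tempo))
        return similarity(c.pitches, pitches) * tempo_term
    return max(case_base, key=score)

def reuse(case: Case, n_notes: int) -> list[float]:
    """Reuse the retrieved deviations, truncated or zero-padded to fit."""
    devs = case.onset_deviations[:n_notes]
    return devs + [0.0] * (n_notes - len(devs))

# Usage: transform a 4-note phrase to 150 BPM using a toy case base.
case_base = [
    Case([60, 62, 64, 65], 120.0, [0.02, -0.01, 0.03, 0.00]),
    Case([60, 62, 64, 65], 150.0, [0.01, -0.02, 0.02, 0.01]),
]
best = retrieve(case_base, [60, 62, 64, 65], target_tempo=150.0)
print(reuse(best, 4))   # expressive deviations to apply at the new tempo
```

The point of the retrieval step is that expressive deviations observed near the target tempo transfer better than deviations scaled from a distant tempo, which is why the score above favors cases performed close to the requested tempo.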


Playing with Cases: Rendering Expressive Music with Case-Based Reasoning

AI Magazine

This paper surveys significant research on the problem of rendering expressive music by means of AI techniques, with an emphasis on case-based reasoning. Following a brief overview discussing why we prefer listening to expressive music instead of lifeless synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the paper we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on TempoExpress, a case-based reasoning system developed at our Institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work that complements audio information with information about the gestures of the musician. Music is played through our bodies; capturing the gestures of the performer is therefore a fundamental aspect that has to be taken into account in future expressive music renderings. This paper is based on the “2011 Robert S. Engelmore Memorial Lecture” given by the first author at AAAI/IAAI 2011.