AI and Music: From Composition to Expressive Performance

AI Magazine

In this article, we first survey the three major types of computer music systems based on AI techniques: (1) compositional, (2) improvisational, and (3) performance systems. Representative examples of each type are briefly described. Then, we look in more detail at the problem of endowing the resulting performances with the expressiveness that characterizes human-generated music. This is one of the most challenging aspects of computer music, and one that has only recently been addressed. The main problem in modeling expressiveness is to grasp the performer's "touch," that is, the knowledge applied when performing a score.


Playing with Cases: Rendering Expressive Music with Case-Based Reasoning

AI Magazine

This paper surveys significant research on the problem of rendering expressive music by means of AI techniques, with an emphasis on case-based reasoning (CBR). Following a brief overview discussing why we prefer listening to expressive music instead of lifeless synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the paper we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on TempoExpress, a case-based reasoning system developed at our Institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work consisting of complementing audio information with information about the gestures of the musician. Music is played through our bodies; capturing the gesture of the performer is therefore a fundamental aspect that has to be taken into account in future expressive music renderings. This paper is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011.
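
To make the CBR cycle concrete, the following is a minimal Python sketch of the retrieve-and-reuse steps that a tempo-transformation system of this kind implies. The case representation, similarity measure, and all names here are illustrative assumptions, not TempoExpress's actual implementation.

from dataclasses import dataclass, field

@dataclass
class Case:
    phrase: str             # identifier of the annotated score fragment
    source_tempo: float     # tempo of the original performance (BPM)
    target_tempo: float     # tempo it was transformed to (BPM)
    edits: list = field(default_factory=list)  # expressive changes applied

def retrieve(case_base, query_ratio, k=3):
    # Rank stored cases by how close their tempo ratio is to the query's.
    return sorted(case_base,
                  key=lambda c: abs(c.target_tempo / c.source_tempo - query_ratio))[:k]

def reuse(neighbors):
    # Naive adaptation: imitate the expressive edits of the nearest case.
    return neighbors[0].edits

# Usage: slow a 120 BPM performance down to 90 BPM (ratio 0.75).
case_base = [
    Case("phrase_a", 120.0, 90.0, ["lengthen onset of note 3"]),
    Case("phrase_b", 100.0, 95.0, ["shorten note 7"]),
]
print(reuse(retrieve(case_base, 90.0 / 120.0)))  # -> edits of phrase_a

A real system would of course rank cases on musical context rather than tempo ratio alone, and adapt the retrieved edits to the new phrase instead of copying them verbatim.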


2011 Robert S. Engelmore Memorial Lecture Award

AI Magazine

Following a brief overview discussing why people prefer listening to expressive music instead of nonexpressive synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance with an emphasis on AI-related approaches. In the main part of the article we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on TempoExpress, a case-based reasoning system developed at our Institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work consisting of complementing audio information with information about the gestures of the musician. Music is played through our bodies; capturing the gesture of the performer is therefore a fundamental aspect that has to be taken into account in future expressive music renderings. This article is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011.


Josep Lluis Arcos

AITopics Original Links

Interested in research on machine learning and time-series analysis algorithms able to process big data in an efficient, adaptive, and robust way. Currently focused on their application to cognitive stimulation and rehabilitation (see the Innobrain and Cognitio projects) and autism spectrum disorders (see the AMATE project). Another topic of interest is the use of machine learning techniques to reason and learn about musical processes such as expressive music generation. Currently focused on the study of musical expressivity in nylon guitars (see guitarLab) and on social tools for music education (see PRAISE). We have studied the issue of expressiveness in the context of tenor saxophone interpretations (see the Saxex and TempoExpress systems) in collaboration with the Music Technology Group (UPF).


A Survey of Artificial Intelligence Research at the IIIA

AI Magazine

The IIIA is a public research centre belonging to the Spanish National Research Council (CSIC) and dedicated to AI research. We focus our activities on a few well-defined subdomains of artificial intelligence, deliberately avoiding dispersion, keeping a good balance between basic research and applications, and paying particular attention to training PhD students and to technology transfer. In this article, we survey some of the most relevant results we have obtained during the last 12 years.


Integrating background musical knowledge in a CBR system for generating expressive musical performances

AAAI Conferences

This paper briefly describes a system called SaxEx, capable of generating expressive musical performances based on examples. We made several recordings of a tenor sax expressively playing jazz ballads. These recordings are analyzed, using spectral modelling techniques, to extract information related to five expressive parameters. The results of this analysis, together with the score, constitute the set of examples (cases) of the case-based component of SaxEx. From these examples, plus background musical knowledge based on Narmour's implication/realization theory of musical perception and Lerdahl and Jackendoff's generative theory of tonal music (GTTM), SaxEx is able to infer a set of expressive transformations to apply to any given input sound file containing an inexpressive musical phrase of another ballad.
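
As a rough illustration of the kind of case such a system builds, here is a hypothetical Python sketch pairing a score-level note with measured expressive parameters. The field names follow the five parameters commonly cited for SaxEx (dynamics, rubato, vibrato, articulation, attack), but the data structures and the transformation step are assumptions, not SaxEx's actual representation.

from dataclasses import dataclass

@dataclass
class NoteCase:
    pitch: str            # score-level note, e.g. "Bb4"
    ir_label: str         # Narmour implication/realization structure, e.g. "P"
    # Expressive parameters measured from the recorded performance
    # (hypothetical encodings):
    dynamics: float       # relative loudness
    rubato: float         # onset deviation from the score, in beats
    vibrato: float        # vibrato depth
    articulation: float   # ratio of played to notated duration
    attack: str           # attack-quality label

def apply_case(flat_note, case):
    # Transfer the expressive deviations of a retrieved case to an
    # inexpressive input note (a dict with onset/duration/loudness).
    out = dict(flat_note)
    out["onset"] += case.rubato
    out["duration"] *= case.articulation
    out["loudness"] = case.dynamics
    return out

example = NoteCase("Bb4", "P", 0.8, 0.05, 0.3, 1.1, "soft")
print(apply_case({"onset": 2.0, "duration": 1.0, "loudness": 0.5}, example))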


Affect-Driven Generation of Expressive Musical Performances

AAAI Conferences

These theories of musical perception and musical understanding, Narmour's implication/realization model and Lerdahl and Jackendoff's GTTM, are the basis of the computational model of musical knowledge of the system. SaxEx is implemented in Noos (Arcos and Plaza 1997; 1996), a reflective object-centered representation language designed to support knowledge modeling of problem solving and learning. In our previous work on SaxEx (Arcos, López de Mántaras, and Serra 1998) we had not taken into account the possibility of exploiting the affective aspects of music to guide the retrieval step of the CBR process. In this paper, we discuss the introduction of labels of an affective nature (such as "calm," "tender," and "aggressive") as a declarative bias in the Identify and Search subtasks of the Retrieval task (see Figure 2). We also briefly present some of the elements underlying SaxEx that are necessary to understand the system.
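
The following Python sketch shows one plausible way affective labels could act as a declarative bias during retrieval: the labels restrict the candidate set before ranking by musical similarity. The filtering scheme and the similarity placeholder are assumptions for illustration, not SaxEx's actual mechanism.

def retrieve_with_affect(case_base, query_phrase, desired_affect, k=3):
    # Restrict candidates to cases annotated with the desired affective
    # label (the hypothetical bias); fall back to the full case base
    # if no case carries that label.
    candidates = [c for c in case_base if desired_affect in c["affect"]]
    if not candidates:
        candidates = case_base
    candidates = sorted(candidates,
                        key=lambda c: musical_distance(c["phrase"], query_phrase))
    return candidates[:k]

def musical_distance(phrase, query):
    # Placeholder similarity; a real system would compare melodic and
    # harmonic analyses of the two phrases.
    return abs(len(phrase) - len(query))

cases = [{"phrase": "ABCD", "affect": {"calm", "tender"}},
         {"phrase": "ABC",  "affect": {"aggressive"}}]
print(retrieve_with_affect(cases, "ABCE", "calm", k=1))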


AI and Music: From Composition to Expressive Performance

AI Magazine

In this article, we first survey the three major types of computer music systems based on AI techniques: (1) compositional, (2) improvisational, and (3) performance systems. The interpretation knowledge a performer applies is largely tacit; for this reason, previous approaches, based on musical rules that try to capture that knowledge explicitly, had serious limitations. An alternative approach, much closer to the observation-imitation process observed in humans, is to directly use the interpretation knowledge implicit in examples extracted from recordings of human performers, instead of trying to make such knowledge explicit. In the last part of the article, we report on a performance system, SAXEX, based on this alternative approach, that is capable of generating high-quality expressive solo performances of jazz ballads from examples of human performers within a case-based reasoning (CBR) system.
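
A toy Python contrast (not SAXEX's actual method) may help make the distinction concrete: a hand-coded rule fixes the expressive deviation in advance, whereas the example-based approach imitates the deviation observed in the most similar recorded note.

# Rule-based: a fixed, hand-coded heuristic (brittle and incomplete).
def rule_based_loudness(note):
    return 1.2 if note["beat"] == 1 else 1.0  # e.g. "stress the downbeat"

# Example-based: imitate the loudness measured in the most similar
# recorded note (the similarity used here is a crude stand-in).
def example_based_loudness(note, examples):
    nearest = min(examples, key=lambda e: abs(e["beat"] - note["beat"]))
    return nearest["observed_loudness"]

examples = [{"beat": 1, "observed_loudness": 1.35},
            {"beat": 3, "observed_loudness": 0.90}]
print(rule_based_loudness({"beat": 1}))               # 1.2, fixed by the rule
print(example_based_loudness({"beat": 1}, examples))  # 1.35, taken from data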