Conformity bias in the cultural transmission of music sampling traditions

arXiv.org Machine Learning

One of the fundamental questions of cultural evolutionary research is how individual-level processes scale up to generate population-level patterns. Previous studies in music have revealed that frequency-based bias (e.g. conformity and novelty) drives large-scale cultural diversity in different ways across domains and levels of analysis. Music sampling is an ideal research model for this process because samples are known to be culturally transmitted between collaborating artists, and sampling events are reliably documented in online databases. The aim of the current study was to determine whether frequency-based bias has played a role in the cultural transmission of music sampling traditions, using a longitudinal dataset of sampling events across three decades. Firstly, we assessed whether turn-over rates of popular samples differ from those expected under neutral evolution. Next, we used agent-based simulations in an approximate Bayesian computation framework to infer what level of frequency-based bias likely generated the observed data. Despite anecdotal evidence of novelty bias, we found that sampling patterns at the population-level are most consistent with conformity bias.
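To make the inference setup concrete, below is a minimal sketch (not the authors' code) of the kind of agent-based, frequency-biased transmission model typically simulated inside an approximate Bayesian computation loop. The parameter names (b for the strength of frequency bias, mu for the innovation rate) and the top-k turnover statistic are assumptions for illustration; b > 0 corresponds to conformity, b < 0 to novelty bias, and b = 0 to neutral copying.

```python
import numpy as np

def simulate(n_agents=1000, n_steps=300, mu=0.01, b=0.0, seed=0):
    """Simulate transmission of cultural variants (e.g. samples).

    b > 0: conformity bias (common variants copied disproportionately often)
    b < 0: novelty / anti-conformity bias
    b = 0: unbiased (neutral) copying
    mu:    innovation rate (probability an agent introduces a new variant)
    """
    rng = np.random.default_rng(seed)
    population = np.arange(n_agents)      # each agent starts with a unique variant
    next_new_variant = n_agents
    history = []

    for _ in range(n_steps):
        variants, counts = np.unique(population, return_counts=True)
        # Frequency-based copying: weight each variant by freq^(1 + b).
        weights = counts.astype(float) ** (1.0 + b)
        probs = weights / weights.sum()
        population = rng.choice(variants, size=n_agents, p=probs)
        # Innovation: some agents invent brand-new variants.
        innovators = rng.random(n_agents) < mu
        n_new = innovators.sum()
        population[innovators] = np.arange(next_new_variant, next_new_variant + n_new)
        next_new_variant += n_new
        history.append(np.copy(population))
    return history

def turnover(history, top_k=10):
    """Number of new entries in the top-k most frequent variants per step."""
    def top(pop):
        v, c = np.unique(pop, return_counts=True)
        return set(v[np.argsort(c)[::-1][:top_k]])
    tops = [top(p) for p in history]
    return [len(b_ - a_) for a_, b_ in zip(tops, tops[1:])]
```

In an ABC framework, simulations would be run across a range of b values and the simulated turnover profiles compared with the observed sampling data to infer which bias level is most plausible.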


A Very Short History Of Artificial Intelligence (AI)

#artificialintelligence

By using this "Contrivance," "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study." Bayesian inference will become a leading approach in machine learning. The boat was equipped with, as Tesla described, "a borrowed mind." The word "robot" comes from the word "robota" (work). It features a robot double of a peasant girl, Maria, which unleashes chaos in the Berlin of 2026; it was the first robot depicted on film, inspiring the Art Deco look of C-3PO in Star Wars.


A Bayesian Network for Real-Time Musical Accompaniment

Neural Information Processing Systems

We describe a computer system that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music for soloist and accompaniment. A Bayesian network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played, relating the two parts through a layer of hidden variables. The network is first constructed using the rhythmic information contained in the musical score. The network is then trained to capture the musical interpretations of the soloist and accompanist in an off-line rehearsal phase. During live accompaniment the learned distribution of the network is combined with a real-time analysis of the soloist's acoustic signal, performed with a hidden Markov model, to generate a musically principled accompaniment that respects all available sources of knowledge. A live demonstration will be provided.
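As a rough illustration of the core idea, here is a minimal sketch (not the paper's actual system) of a linear-Gaussian model that links score positions to performed onset times through a hidden tempo, updating as solo onsets are observed and predicting when to schedule accompaniment notes. The noise values and class structure are assumptions for illustration only.

```python
import numpy as np

class TempoTracker:
    """Track (onset time, seconds-per-beat) as a 2-D Gaussian state."""
    def __init__(self, start_time=0.0, sec_per_beat=0.5):
        self.mean = np.array([start_time, sec_per_beat])   # [time, tempo]
        self.cov = np.diag([0.05, 0.01])
        self.obs_var = 0.02 ** 2       # assumed onset-detection noise (s^2)
        self.tempo_drift = 0.005 ** 2  # assumed tempo random walk per beat

    def predict(self, beats_ahead):
        """Predicted onset time of a note `beats_ahead` beats later."""
        F = np.array([[1.0, beats_ahead], [0.0, 1.0]])
        mean = F @ self.mean
        cov = F @ self.cov @ F.T + np.diag([0.0, self.tempo_drift * beats_ahead])
        return mean, cov

    def update(self, beats_ahead, observed_time):
        """Condition on an observed solo onset (e.g. from an HMM front end)."""
        mean, cov = self.predict(beats_ahead)
        H = np.array([1.0, 0.0])
        innovation = observed_time - H @ mean
        s = H @ cov @ H + self.obs_var
        k = cov @ H / s
        self.mean = mean + k * innovation
        self.cov = cov - np.outer(k, H @ cov)

# Example: a solo onset is detected at 0.52 s; schedule an accompaniment
# note half a beat later under the updated tempo estimate.
tracker = TempoTracker()
tracker.update(beats_ahead=1.0, observed_time=0.52)
acc_mean, _ = tracker.predict(beats_ahead=0.5)
print(f"play accompaniment note at ~{acc_mean[0]:.3f} s")
```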


Time-Aware Latent Concept Expansion for Microblog Search

AAAI Conferences

Incorporating the temporal property of words into query expansion methods based on relevance feedback has been shown to have a significant positive effect on microblog search. In contrast to such word-based query expansion methods, we propose a concept-based query expansion method based on a temporal relevance model that uses the temporal variation of concepts (e.g., terms and phrases) on microblogs. Our model naturally extends an extremely effective existing concept-based relevance model by tracking the concept frequency over time. Moreover, the proposed model produces important concepts that are frequently used within a particular time period associated with a given topic, which better discriminate between relevant and non-relevant microblog documents than words. Our experiments using a corpus of microblog data (Tweets2011 corpus) show that the proposed concept-based query expansion method improves search performance significantly, especially for highly relevant documents.
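The sketch below is a deliberate simplification (not the paper's exact model, which extends latent concept expansion): it scores candidate expansion concepts from feedback documents by their frequency, weighted by how close each document's timestamp is to the query time. The exponential recency kernel, function name, and example data are assumptions for illustration.

```python
import math
from collections import Counter

def temporal_concept_scores(feedback_docs, query_time, decay_hours=24.0):
    """feedback_docs: list of (timestamp_hours, [concepts]) from initial retrieval."""
    scores = Counter()
    for doc_time, concepts in feedback_docs:
        # Assumed exponential recency weighting; the paper instead tracks
        # concept frequency over time within a temporal relevance model.
        w = math.exp(-abs(query_time - doc_time) / decay_hours)
        for c in concepts:
            scores[c] += w
    total = sum(scores.values()) or 1.0
    return {c: s / total for c, s in scores.items()}

docs = [
    (100.0, ["earthquake", "tsunami warning", "relief"]),
    (101.5, ["tsunami warning", "evacuation"]),
    (160.0, ["earthquake", "insurance"]),   # much older, down-weighted
]
expansion = temporal_concept_scores(docs, query_time=102.0)
print(sorted(expansion.items(), key=lambda kv: -kv[1])[:3])
```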


Probabilistic Interactive Installations

AAAI Conferences

We present a description of two small audio/visual immersive installations. The main framework is an interactive structure that enables multiple participants to generate jazz improvisations, loosely speaking. The first uses a Bayesian Network to respond to sung or played pitches with machine pitches, in a kind of constrained harmonic way. The second uses Bayesian Networks and Hidden Markov Models to track human motion, play reactive chords, and to respond to pitches both aurally and visually.
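As a toy illustration of the first installation's behavior, here is a minimal sketch (not the installations' actual networks) of responding to a detected pitch with a harmonically constrained machine pitch: a response interval is sampled from a distribution that favors consonances. The interval weights and function name are assumptions for illustration.

```python
import random

# Assumed prior over response intervals in semitones (consonances weighted up).
INTERVAL_WEIGHTS = {
    0: 1.0,    # unison
    3: 2.0,    # minor third
    4: 2.5,    # major third
    7: 3.0,    # perfect fifth
    9: 1.5,    # major sixth
    12: 1.0,   # octave
}

def respond(pitch_midi, direction_up_prob=0.5):
    """Return a machine pitch (MIDI number) given a detected human pitch."""
    intervals = list(INTERVAL_WEIGHTS)
    weights = [INTERVAL_WEIGHTS[i] for i in intervals]
    interval = random.choices(intervals, weights=weights, k=1)[0]
    sign = 1 if random.random() < direction_up_prob else -1
    return pitch_midi + sign * interval

# Example: a participant sings A4 (MIDI 69); the machine answers with a
# consonant pitch above or below it.
print(respond(69))
```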