Composer


ChatGPT Needs More Cowbell

The Atlantic - Technology

AI struggles to write a good jingle. You'd be forgiven if you can't hum the 18th-century Cumbrian folk song "Do Ye Ken John Peel." But in 1942, a version of that tune, reworked with lyrics about Pepsi-Cola, was the most recognized melody in America. Three years earlier, two men walked into the office of Pepsi-Cola's president, carrying a phonograph. They played a demo of what would become one of America's earliest advertising jingles.


Automated Composition of Agents: A Knapsack Approach for Agentic Component Selection

Yuan, Michelle, Pahwa, Khushbu, Chang, Shuaichen, Kaba, Mustafa, Jiang, Jiarong, Ma, Xiaofei, Zhang, Yi, Sunkara, Monica

arXiv.org Artificial Intelligence

Designing effective agentic systems requires the seamless composition and integration of agents, tools, and models within dynamic and uncertain environments. Most existing methods rely on static, semantic retrieval approaches for tool or agent discovery. However, effective reuse and composition of existing components remain challenging due to incomplete capability descriptions and the limitations of retrieval methods. Component selection suffers because the decisions are not based on capability, cost, and real-time utility. To address these challenges, we introduce a structured, automated framework for agentic system composition that is inspired by the knapsack problem. Our framework enables a composer agent to systematically identify, select, and assemble an optimal set of agentic components by jointly considering performance, budget constraints, and compatibility. By dynamically testing candidate components and modeling their utility in real-time, our approach streamlines the assembly of agentic systems and facilitates scalable reuse of resources. Empirical evaluation with Claude 3.5 Sonnet across five benchmarking datasets shows that our online-knapsack-based composer consistently lies on the Pareto frontier, achieving higher success rates at significantly lower component costs compared to our baselines. In the single-agent setup, the online knapsack composer shows a success rate improvement of up to 31.6% in comparison to the retrieval baselines. In multi-agent systems, the online knapsack composer increases success rate from 37% to 87% when agents are selected from an agent inventory of 100+ agents. The substantial performance gap confirms the robust adaptability of our method across diverse domains and budget constraints.
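The core idea can be illustrated with a minimal sketch of the classic 0/1-knapsack selection the framework is inspired by: maximize total component utility subject to a cost budget. The paper's composer uses an online formulation with real-time utility estimates from dynamic testing; the component names, costs, and utilities below are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    cost: float       # e.g. expected token/tool-call cost of the component
    utility: float    # measured contribution to success, from dynamic testing

def knapsack_select(components, budget):
    """0/1 knapsack over agentic components: maximize total utility
    subject to a cost budget, via dynamic programming on scaled costs."""
    scale = 100  # discretize fractional costs into integer units
    max_budget = int(budget * scale)
    # dp[b] = (best utility, chosen names) achievable within cost b
    dp = [(0.0, [])] * (max_budget + 1)
    for comp in components:
        c = int(comp.cost * scale)
        for b in range(max_budget, c - 1, -1):  # iterate downward: each component used at most once
            candidate = dp[b - c][0] + comp.utility
            if candidate > dp[b][0]:
                dp[b] = (candidate, dp[b - c][1] + [comp.name])
    return dp[max_budget]

# Hypothetical component inventory (names and numbers are made up)
inventory = [
    Component("retriever-agent", 0.40, 0.35),
    Component("code-exec-tool", 0.25, 0.30),
    Component("planner-agent", 0.50, 0.45),
    Component("calculator-tool", 0.10, 0.12),
]
utility, chosen = knapsack_select(inventory, budget=1.0)
```

Under the budget of 1.0, the selector skips the cheap-but-redundant combination and picks the subset with the highest joint utility that still fits, which is the behavior the paper generalizes to an online setting where components arrive and are tested incrementally.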


MusicAIR: A Multimodal AI Music Generation Framework Powered by an Algorithm-Driven Core

Liao, Callie C., Liao, Duoduo, Zhang, Ellie L.

arXiv.org Artificial Intelligence

Recent advances in generative AI have made music generation a prominent research focus. However, many neural-based models rely on large datasets, raising concerns about copyright infringement and high performance costs. In contrast, we propose MusicAIR, an innovative multimodal AI music generation framework powered by a novel algorithm-driven symbolic music core, effectively mitigating copyright infringement risks. The music core algorithms connect critical lyrical and rhythmic information to automatically derive musical features, creating a complete, coherent melodic score solely from the lyrics. The MusicAIR framework facilitates music generation from lyrics, text, and images. The generated score adheres to established principles of music theory, lyrical structure, and rhythmic conventions. We developed Generate AI Music (GenAIM), a web tool using MusicAIR for lyric-to-song, text-to-music, and image-to-music generation. In our experiments, we evaluated AI-generated music scores produced by the system using both standard music metrics and a novel analysis that compares these compositions with original works. The system achieves an average key confidence of 85%, compared with 79% for human composers, and aligns closely with established music theory standards, demonstrating its ability to generate diverse, human-like compositions. As a co-pilot tool, GenAIM can serve as a reliable music composition assistant and a potential educational composition tutor, lowering the entry barrier for aspiring musicians and contributing meaningfully to AI for music generation.



ComMU: Dataset for Combinatorial Music Generation

Neural Information Processing Systems

Combinatorial music generation creates short samples of music with rich musical metadata and combines them to produce a complete piece of music. In addition, we introduce ComMU, the first symbolic music dataset consisting of short music samples paired with 12 types of musical metadata for combinatorial music generation.




'I'm a composer. Am I staring extinction in the face?': classical music and AI

The Guardian

Riding a wave means surrendering to the pull. Technology is radically reshaping how we make music. As I dug deeper into this for a Radio 3 documentary, I began to wonder whether creative organisations are right to be so upbeat about AI. Are we riding the wave, or will the wave destroy us?


Supporting Creative Ownership through Deep Learning-Based Music Variation

Krol, Stephen James, Llano, Maria Teresa, McCormack, Jon

arXiv.org Artificial Intelligence

This paper investigates the importance of personal ownership in musical AI design, examining how practising musicians can maintain creative control over the compositional process. Through a four-week ecological evaluation, we examined how a music variation tool, reliant on the skill of musicians, functioned within a composition setting. Our findings demonstrate that the tool's dependence on the musician's ability, both to provide a strong initial musical input and to turn moments into complete musical ideas, promoted ownership of both the process and the artefact. Qualitative interviews further revealed the importance of this personal ownership, highlighting tensions between technological capability and artistic identity. These findings provide insight into how musical AI can support rather than replace human creativity, underscoring the importance of designing tools that preserve the humanness of musical expression.


Composer: A Search Framework for Hybrid Neural Architecture Design

Acun, Bilge, Sinha, Prasoon, Ardalani, Newsha, Bae, Sangmin, Golden, Alicia, Lin, Chien-Yu, Madhyastha, Meghana, Sun, Fei, Yadwadkar, Neeraja J., Wu, Carole-Jean

arXiv.org Artificial Intelligence

Hybrid model architectures that combine computational primitives (e.g., Attention, MLP) in different ratios have shown promising performance beyond Transformers. Some studies have also shown that different interleavings of primitives can affect model quality. However, prior work explores the hybrid model architecture design space manually. Due to the large design space and training costs, discovering hybrid models that combine key computational primitives for pre-training is challenging. In this work, we take a principled approach to designing a modular hybrid model architecture search framework -- Composer. Composer explores model architectures at a small scale and extrapolates the top-performing model architectures to a larger scale using our proposed scaling strategies. Using Composer, we discover new hybrid LLM architectures that outperform Llama 3.2. Compared to Llama 3.2 and previous state-of-the-art baselines, the new model architectures consistently reduce validation loss at parameter scales of 350M-3B and improve evaluation accuracy on downstream tasks by 2.8-8.3% (1.1-3.1% on average) while improving both training and inference efficiency.
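The search-then-extrapolate loop described above can be sketched as: enumerate interleavings of primitives at a small depth, rank them by a proxy score, and keep the top candidates for scaling up. This is a minimal illustration, not Composer's actual search procedure; the primitive set, depth, and proxy scoring function below are all hypothetical stand-ins.

```python
from itertools import product

# Hypothetical primitive set; real hybrid designs may include more block types
PRIMITIVES = ["attention", "mlp", "conv"]

def candidate_architectures(depth):
    """Enumerate every interleaving of primitives at a small depth."""
    return [list(arch) for arch in product(PRIMITIVES, repeat=depth)]

def search(depth, proxy_loss, top_k=3):
    """Rank small-scale candidates by a proxy loss and keep the top-k,
    which would then be extrapolated to larger parameter scales."""
    ranked = sorted(candidate_architectures(depth), key=proxy_loss)
    return ranked[:top_k]

def toy_proxy(arch):
    """Toy stand-in for small-scale validation loss: prefer a balanced
    attention/MLP ratio (purely illustrative, not a real quality signal)."""
    return abs(arch.count("attention") - arch.count("mlp"))

best = search(depth=4, proxy_loss=toy_proxy)
```

In a real system, the proxy score would come from actually pre-training each small candidate and measuring validation loss, and the surviving architectures would be grown with the paper's scaling strategies rather than used directly.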