Computational Modeling of Artistic Inspiration: A Framework for Predicting Aesthetic Preferences in Lyrical Lines Using Linguistic and Stylistic Features
Sahu, Gaurav, Vechtomova, Olga
Artistic inspiration remains one of the least understood aspects of the creative process. It plays a crucial role in producing works that resonate deeply with audiences, but the complexity and unpredictability of the aesthetic stimuli that evoke inspiration have eluded systematic study. This work proposes a novel framework for computationally modeling the artistic preferences of different individuals through key linguistic and stylistic properties, with a focus on lyrical content. Alongside the framework, we introduce EvocativeLines, a dataset of lyric lines annotated as either "inspiring" or "not inspiring," to facilitate the evaluation of our framework across diverse preference profiles. Our computational model leverages the proposed linguistic and poetic features and applies a calibration network on top of them to accurately forecast artistic preferences among different creative individuals. Our experiments demonstrate that our framework outperforms an out-of-the-box LLaMA-3-70b, a state-of-the-art open-source language model, by nearly 18 points. Overall, this work contributes an interpretable and flexible framework that can be adapted to analyze any type of inherently subjective artistic preference across a wide spectrum of skill levels.
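The abstract's pipeline (shared linguistic/stylistic features, plus a per-annotator calibration layer that maps them to an "inspiring" / "not inspiring" decision) can be illustrated with a minimal sketch. The specific features and the logistic calibration here are illustrative assumptions, not the paper's actual feature set or architecture.

```python
# Sketch: score a lyric line with simple surface features, then apply a
# per-annotator "calibration" (here, a logistic layer) to predict preference.
# Feature choices and weights are hypothetical, for illustration only.
import math

def extract_features(line: str) -> list[float]:
    words = line.split()
    n = max(len(words), 1)
    return [
        float(len(words)),                        # line length in words
        sum(len(w) for w in words) / n,           # mean word length
        len({w.lower() for w in words}) / n,      # lexical diversity
    ]

def calibrated_score(features: list[float], weights: list[float], bias: float) -> float:
    """Per-annotator calibration: a logistic layer over the shared features."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def predict(line: str, weights: list[float], bias: float) -> str:
    score = calibrated_score(extract_features(line), weights, bias)
    return "inspiring" if score >= 0.5 else "not inspiring"
```

Because only the calibration parameters differ per annotator, the same feature extractor can serve many preference profiles, which is the interpretability/flexibility claim in the abstract.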
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > New York (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
- (2 more...)
- Leisure & Entertainment (0.67)
- Media > Music (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
ELMI: Interactive and Intelligent Sign Language Translation of Lyrics for Song Signing
Yoo, Suhyeon, Truong, Khai N., Kim, Young-Ho
d/Deaf and hearing song-signers have become prevalent on video-sharing platforms, but translating songs into sign language remains cumbersome and inaccessible. Our formative study revealed the challenges song-signers face, including semantic, syntactic, expressive, and rhythmic considerations in translation. We present ELMI, an accessible song-signing tool that assists in translating lyrics into sign language. ELMI enables users to edit glosses line by line, with real-time synced lyric highlighting and music video snippets. Users can also chat with a large language model-driven AI to discuss meaning, glossing, emoting, and timing. Through an exploratory study with 13 song-signers, we examined how ELMI facilitates their workflows and how song-signers leverage and perceive LLM-driven chat for translation. Participants successfully adopted ELMI into their song-signing, engaging in active discussions on the fly. They also reported improved confidence and independence in their translations, finding ELMI encouraging, constructive, and informative. We discuss design implications for leveraging LLMs in culturally sensitive song-signing translation.
- North America > Canada > Ontario > Toronto (0.46)
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > District of Columbia > Washington (0.05)
- (5 more...)
- Research Report (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.46)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Education > Curriculum > Subject-Specific Education (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Machine Translation (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
AI system for live music accompaniment and improvisation
Electronic music artists and sound designers create complex and unique audio paths by patching modular and standalone synthesizers to achieve a desired sound effect. The process is inherently unpredictable, resulting in sounds that are unique and impossible to replicate, especially in complex audio paths. Electronic artists also often have an organic approach to composition, where they may start with an open mind and find inspiration by playing their instruments. This is in part due to the experimental nature of many electronic instruments and the importance of novel sound effects and textures (e.g., noise and drone) in electronic and electro-acoustic compositions. As a result of workflows associated with electronic music composition and production, artists can accumulate thousands of hours of studio recordings.
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
LyricJam Sonic: A Generative System for Real-Time Composition and Musical Improvisation
Vechtomova, Olga, Sahu, Gaurav
Electronic music artists and sound designers have unique workflow practices that necessitate specialized approaches for developing music information retrieval and creativity support tools. Furthermore, electronic music instruments, such as modular synthesizers, have near-infinite possibilities for sound creation and can be combined to create unique and complex audio paths. The process of discovering interesting sounds is often serendipitous and impossible to replicate. For this reason, many musicians in electronic genres record audio output at all times while they work in the studio. Subsequently, it is difficult for artists to rediscover audio segments that might be suitable for use in their compositions from thousands of hours of recordings. In this paper, we describe LyricJam Sonic -- a novel creative tool for musicians to rediscover their previous recordings, re-contextualize them with other recordings, and create original live music compositions in real time. A bi-modal AI-driven approach uses generated lyric lines to find matching audio clips from the artist's past studio recordings, and uses them to generate new lyric lines, which in turn are used to find other clips, thus creating a continuous and evolving stream of music and lyrics. The intent is to keep the artists in a state of creative flow conducive to music creation rather than taking them into an analytical/critical state of deliberately searching for past audio segments. The system can run in either a fully autonomous mode without user input, or in a live performance mode, where the artist plays live music, while the system "listens" and creates a continuous stream of music and lyrics in response.
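The abstract's bi-modal loop (a lyric line retrieves a matching clip, the clip seeds the next lyric line, and so on) can be sketched as a simple alternation. The word-overlap retrieval and tag-based "generator" below are stand-in stubs for the system's learned audio and text models, which the abstract does not detail.

```python
# Toy sketch of the LyricJam Sonic alternation described in the abstract:
# lyric -> retrieve clip -> generate next lyric -> retrieve next clip -> ...
# Retrieval (word overlap) and generation (joining clip tags) are
# hypothetical stand-ins for the paper's learned models.

def retrieve_clip(lyric: str, library: dict[str, set[str]]) -> str:
    """Pick the clip whose tag set overlaps the lyric's words the most."""
    words = set(lyric.lower().split())
    return max(library, key=lambda clip: len(library[clip] & words))

def next_lyric(clip: str, library: dict[str, set[str]]) -> str:
    """Stand-in 'generator': build a line from the retrieved clip's tags."""
    return " ".join(sorted(library[clip]))

def jam(seed_lyric: str, library: dict[str, set[str]], steps: int) -> list[tuple[str, str]]:
    """Run the loop for a fixed number of steps, collecting (clip, lyric) pairs."""
    stream, lyric = [], seed_lyric
    for _ in range(steps):
        clip = retrieve_clip(lyric, library)
        lyric = next_lyric(clip, library)
        stream.append((clip, lyric))
    return stream
```

In the real system each step would call generative models over audio and text latents; the point of the sketch is only the feedback structure that yields a continuous, evolving stream.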
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- (3 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
Researchers develop real-time lyric generation technology to inspire song writing
Music artists can find inspiration and new creative directions for their song writing with technology developed by Waterloo researchers. LyricJam, a real-time system that uses artificial intelligence (AI) to generate lyric lines for live instrumental music, was created by members of the University's Natural Language Processing Lab. The lab, led by Olga Vechtomova, a Waterloo Engineering professor cross-appointed in Computer Science, has been researching creative applications of AI for several years. The lab's initial work led to the creation of a system that learns musical expressions of artists and generates lyrics in their style. Recently, Vechtomova, along with Waterloo graduate students Gaurav Sahu and Dhruv Kumar, developed technology that relies on various aspects of music such as chord progressions, tempo and instrumentation to synthesize lyrics reflecting the mood and emotions expressed by live music.
- Media > Music (0.94)
- Leisure & Entertainment (0.94)
Having treble with songwriting? AI lyric generation is here to help
LyricJam, a real-time system that uses artificial intelligence (AI) to generate lyric lines for live instrumental music, was created by members of the University of Waterloo's Natural Language Processing Lab. The lab, led by Olga Vechtomova, a Waterloo Engineering Professor cross-appointed in Computer Science, has been researching creative applications of AI for several years. The lab's initial work led to the creation of a system that learns musical expressions of artists and generates lyrics in their style. Recently, Vechtomova, along with Waterloo graduate students Gaurav Sahu and Dhruv Kumar, developed technology that relies on various aspects of music such as chord progressions, tempo and instrumentation to synthesise lyrics, reflecting the mood and emotions expressed by live music. As a musician or a band plays instrumental music, the system continuously receives the raw audio clips, which the neural network processes to generate new lyric lines.
- Media > Music (0.95)
- Leisure & Entertainment (0.95)
AI Delivers a New Creative Direction for the Musicians
LyricJam, a real-time system that uses artificial intelligence (AI) to generate lyric lines for live instrumental music, was created by members of the University's Natural Language Processing Lab. The lab, led by Olga Vechtomova, a Waterloo Engineering professor cross-appointed in Computer Science, has been researching creative applications of AI for several years. The lab's initial work led to the creation of a system that learns musical expressions of artists and generates lyrics in their style. Recently, Vechtomova, along with Waterloo graduate students Gaurav Sahu and Dhruv Kumar, developed technology that relies on various aspects of music such as chord progressions, tempo, and instrumentation to synthesize lyrics reflecting the mood and emotions expressed by live music. As a musician or a band plays instrumental music, the system continuously receives the raw audio clips, which the neural network processes to generate new lyric lines.
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
LyricJam: A system for generating lyrics for live instrumental music
Vechtomova, Olga, Sahu, Gaurav, Kumar, Dhruv
We describe a real-time system that receives a live audio stream from a jam session and generates lyric lines that are congruent with the live music being played. Two novel approaches are proposed to align the learned latent spaces of audio and text representations that allow the system to generate novel lyric lines matching live instrumental music. One approach is based on adversarial alignment of latent representations of audio and lyrics, while the other approach learns to transfer the topology from the music latent space to the lyric latent space. A user study with music artists using the system showed that the system was useful not only in lyric composition, but also encouraged the artists to improvise and find new musical expressions. Another user study demonstrated that users preferred the lines generated using the proposed methods to the lines generated by a baseline model.
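The first alignment approach in the abstract is adversarial: a discriminator learns to tell audio latents from lyric latents, while the lyric side is trained to fool it, pulling the two latent spaces together. A minimal sketch of the two opposing losses, assuming a simple linear discriminator (the paper's actual architecture is not specified here):

```python
# Hedged sketch of adversarial latent-space alignment: the discriminator
# minimizes d_loss (telling the spaces apart); the lyric encoder minimizes
# g_loss (making its latents look like audio latents). The linear
# discriminator is an illustrative assumption.
import math

def discriminator(z: list[float], w: list[float]) -> float:
    """Estimated probability that latent vector z came from the audio space."""
    s = sum(zi * wi for zi, wi in zip(z, w))
    return 1.0 / (1.0 + math.exp(-s))

def adversarial_losses(audio_latents: list[list[float]],
                       lyric_latents: list[list[float]],
                       w: list[float]) -> tuple[float, float]:
    """Standard GAN-style cross-entropy losses for the two players."""
    d_loss = (
        -sum(math.log(discriminator(z, w)) for z in audio_latents)
        - sum(math.log(1.0 - discriminator(z, w)) for z in lyric_latents)
    )
    g_loss = -sum(math.log(discriminator(z, w)) for z in lyric_latents)
    return d_loss, g_loss
```

At the equilibrium the discriminator cannot distinguish the spaces, so a lyric latent can be decoded against nearby audio latents, which is what lets generated lyric lines match live instrumental music.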
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > Canada > Alberta > Census Division No. 15 > Improvement District No. 9 > Banff (0.04)
- (2 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
This Neural Network Can Generate Lyrics Just Like Your Favourite Artiste
Artificial intelligence is the notion of machines that exhibit intelligence and mimic cognitive functions usually associated with humans, such as learning, reasoning, predicting, planning, recognising, and even problem-solving. AI tools are being increasingly integrated into technological solutions, and they can now even generate lyrics that match the style of unique music artists. Researchers at the University of Waterloo, Canada, recently achieved this feat by developing a system that can generate such song lyrics. Olga Vechtomova of the University of Waterloo, one of the minds behind this innovation, stresses that the research grew out of a basic human curiosity: could a machine generate lines that sound like the lyrics of my favourite music artists? While working on text generative models, the research team found that neural networks were capable of generating some remarkably creative and impressive lines of text.
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.26)
- Asia > India (0.06)
- Media > Music (0.37)
- Leisure & Entertainment (0.37)