creativity
Electronic artist and YouTuber Look Mum No Computer to represent UK at Eurovision
Electronic music artist and tech creator Look Mum No Computer has been chosen to represent the UK at this year's Eurovision Song Contest in Vienna, the BBC has announced. Look Mum No Computer is a solo artist, songwriter and YouTuber, who is also described as an inventor of unique musical machines. The singer first arrived on the music scene back in 2014 as Sam Battle, frontman of indie rock band Zibra. The group performed at Glastonbury in 2015 for BBC Introducing. Since then, he has been performing and recording under his solo name.
- Europe > Austria > Vienna (0.26)
- North America > United States (0.16)
- North America > Central America (0.15)
- (15 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
A backlash against AI imagery in ads may have begun as brands promote 'human-made'
In a wave of new ads, brands like Heineken, Polaroid and Cadbury have started hating on artificial intelligence (AI), celebrating their work as "human-made". But in these advertising campaigns on TV, on billboards on New York streets and on social media, the companies are signalling something larger. Even Apple's new series release, Pluribus, includes the phrase "Made by Humans" in the closing credits. Other brands, including H&M and Guess, have faced a backlash for using AI brand ambassadors instead of humans. These gestures suggest we have reached a cultural moment in the evolution of this technology, where people are unsure what creativity means when machines can now produce much of what we see and hear, and perhaps even what moves us.
- North America > United States > New York (0.25)
- Oceania > Australia > Queensland (0.05)
- North America > Canada (0.05)
- (2 more...)
- Media (0.56)
- Leisure & Entertainment (0.35)
Disney and OpenAI have made a surprise deal – what happens next?
The world's best-known AI company and the world's best-known entertainment firm have come to a surprise agreement to allow AI versions of some of the most iconic characters in film, TV and cartoons to be used in generative AI videos and images. The Walt Disney Company has signed a deal with OpenAI that will allow the AI firm's Sora video generation tool and ChatGPT image creator to use more than 200 of Disney's most iconic characters. Meanwhile, Disney remains in dispute with another AI firm, Midjourney, over alleged infringement of its intellectual property (IP), claiming Midjourney aims to "blatantly incorporate and copy Disney's and Universal's famous characters" into its image generation tool. The characters now deemed fair game for OpenAI users include the likes of Mickey and Minnie Mouse, Simba and Mufasa from The Lion King, and Moana, as well as Marvel and Lucasfilm characters, including some of Star Wars' most well-known names.
- Europe > United Kingdom > Wales (0.05)
- Europe > United Kingdom > England > Staffordshire (0.05)
- Leisure & Entertainment (1.00)
- Media > Film (0.55)
- Law > Intellectual Property & Technology Law (0.35)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Persona-based Multi-Agent Collaboration for Brainstorming
Straub, Nate, Khan, Saara, Jay, Katharina, Cabral, Brian, Linde, Oskar
Abstract--We demonstrate the importance of persona-based multi-agent brainstorming for both diverse topics and subject-matter ideation. Prior work has shown that generalized multi-agent collaboration often provides better reasoning than a single agent alone [1]. In this paper, we propose and develop a framework for persona-based agent selection, showing how curating persona domains can improve brainstorming outcomes. Using multiple experimental setups, we evaluate brainstorming outputs across different persona pairings (e.g., Doctor vs VR Engineer) and A2A (agent-to-agent) dynamics (separate, together, separate-then-together). Our results show that (1) persona choice shapes idea domains, (2) collaboration mode shifts the diversity of idea generation, and (3) multi-agent persona-driven brainstorming produces greater idea depth and cross-domain coverage.

Brainstorming has historically been a human-centered activity in which diverse individuals bring unique knowledge and perspectives to generate novel ideas. Locke's theory of knowledge formation emphasizes that combining and abstracting experiences across multiple people leads to more complex ideas. Similarly, design thinking frameworks dating to the 1950s and '60s emphasize the importance of multiple participants generating and refining ideas through structured exploration of a pre-defined question [2]. These frameworks use a set of cognitive, strategic, and practical procedures for ideation [2], and in this paper we focus on 'brainstorming' as an area of exploration for multi-agent collaboration. Brainstorming is normally done by multiple, diverse humans standing together at a whiteboard, generating ideas against a topic area written on it.
- Health & Medicine (1.00)
- Education (0.93)
AI Co-Artist: A LLM-Powered Framework for Interactive GLSL Shader Animation Evolution
Yuksel, Kamer Ali, Sawaf, Hassan
Creative coding and real-time shader programming are at the forefront of interactive digital art, enabling artists, designers, and enthusiasts to produce mesmerizing, complex visual effects that respond to real-time stimuli such as sound or user interaction. However, despite the rich potential of tools like GLSL, the steep learning curve and requirement for programming fluency pose substantial barriers for newcomers and even experienced artists who may not have a technical background. In this paper, we present AI Co-Artist, a novel interactive system that harnesses the capabilities of large language models (LLMs), specifically GPT-4, to support the iterative evolution and refinement of GLSL shaders through a user-friendly, visually-driven interface. Drawing inspiration from the user-guided evolutionary principles pioneered by the Picbreeder platform, our system empowers users to evolve shader art using intuitive interactions, without needing to write or understand code. AI Co-Artist serves as both a creative companion and a technical assistant, allowing users to explore a vast generative design space of real-time visual art. Through comprehensive evaluations, including structured user studies and qualitative feedback, we demonstrate that AI Co-Artist significantly reduces the technical threshold for shader creation, enhances creative outcomes, and supports a wide range of users in producing professional-quality visual effects. Furthermore, we argue that this paradigm is broadly generalizable. By leveraging the dual strengths of LLMs--semantic understanding and program synthesis--our method can be applied to diverse creative domains, including website layout generation, architectural visualizations, product prototyping, and infographics. We also explore whether human curators in the interactive process could be replaced or augmented with multimodal vision-language models acting as autonomous aesthetic judges to allow closed-loop evolution.
- North America > United States > New York (0.04)
- North America > United States > California > Santa Clara County > San Jose (0.04)
- Questionnaire & Opinion Survey (0.55)
- Research Report (0.50)
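The Picbreeder-style, user-guided evolution loop the abstract describes can be sketched as follows. This is a minimal sketch under stated assumptions: `llm_mutate` stands in for the GPT-4 mutation step (the real system prompts a model to vary the GLSL source), and `pick_favorite` is the human curator, or the autonomous aesthetic judge the paper proposes.

```python
def llm_mutate(shader_src: str, seed: int) -> str:
    """Stub for the LLM-driven mutation step: the paper's system would
    prompt GPT-4 for a semantically meaningful variation of the GLSL
    source. Here we just tag the source so the loop is runnable."""
    return shader_src + f"\n// variation {seed}"

def evolve(initial: str, pick_favorite, generations: int = 3, pop_size: int = 4) -> str:
    """Picbreeder-style user-guided evolution: show a population of
    variants, let the curator pick one, and breed the next generation
    from the pick."""
    parent = initial
    for gen in range(generations):
        population = [llm_mutate(parent, gen * pop_size + i) for i in range(pop_size)]
        parent = pick_favorite(population)
    return parent

# A trivial 'judge' standing in for the human curator (or a
# vision-language model scoring rendered frames, as the paper proposes).
best = evolve("void main() { gl_FragColor = vec4(1.0); }",
              pick_favorite=lambda pop: pop[0])
```

The design point is that the user only ever selects among rendered variants; the language model carries all of the code-level work, which is what removes the programming-fluency barrier.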
Emovectors: assessing emotional content in jazz improvisations for creativity evaluation
Music improvisation is fascinating to study, being essentially a live demonstration of a creative process. In jazz, musicians often improvise across predefined chord progressions (leadsheets). How do we assess the creativity of jazz improvisations? And can we capture this in automated creativity metrics for current LLM-based generative systems? Demonstration of emotional involvement is closely linked with creativity in improvisation. Analysing musical audio, can we detect emotional involvement? This study hypothesises that if an improvisation contains more evidence of emotion-laden content, it is more likely to be recognised as creative. An embeddings-based method is proposed for capturing the emotional content in musical improvisations, using a psychologically grounded classification of musical characteristics associated with emotions. The resulting 'emovectors' are analysed to test the above hypothesis, comparing across multiple improvisations. Capturing emotional content in this quantifiable way can contribute towards new metrics for creativity evaluation that can be applied at scale.
- Europe > United Kingdom > England > Kent > Canterbury (0.40)
- North America > United States > District of Columbia > Washington (0.04)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Europe > Poland (0.04)
- (2 more...)
- Media > News (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- (3 more...)
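The emovector idea (mapping musical characteristics onto a fixed set of emotion dimensions, then comparing improvisations in that space) can be illustrated with a toy version. The dimensions, features, and weights below are purely illustrative assumptions, not the paper's psychologically grounded classification.

```python
import math

EMOTIONS = ["joy", "tension", "sadness", "serenity"]

def emovector(features: dict[str, float]) -> list[float]:
    """Map coarse audio features (tempo in BPM, mode 0..1 where 1 is
    major, dynamic range 0..1) onto an illustrative emotion vector."""
    tempo = features["tempo"] / 200.0            # normalise to roughly 0..1
    mode = features["mode"]
    dyn = features["dynamic_range"]
    return [
        tempo * mode,              # joy: fast and major
        tempo * dyn,               # tension: fast with wide dynamics
        (1 - tempo) * (1 - mode),  # sadness: slow and minor
        (1 - tempo) * (1 - dyn),   # serenity: slow with narrow dynamics
    ]

def emotional_distance(a: dict[str, float], b: dict[str, float]) -> float:
    """Compare two improvisations in emovector space."""
    return math.dist(emovector(a), emovector(b))

uptempo = {"tempo": 180, "mode": 1.0, "dynamic_range": 0.8}
ballad = {"tempo": 70, "mode": 0.2, "dynamic_range": 0.3}
d = emotional_distance(uptempo, ballad)
```

The quantifiable part is the key point: once each improvisation is a fixed-length vector, standard distance and clustering tools apply at scale.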
Rethinking AI Evaluation in Education: The TEACH-AI Framework and Benchmark for Generative AI Assistants
As generative artificial intelligence (AI) continues to transform education, most existing AI evaluations rely primarily on technical performance metrics such as accuracy or task efficiency while overlooking human identity, learner agency, contextual learning processes, and ethical considerations. In this paper, we present TEACH-AI (Trustworthy and Effective AI Classroom Heuristics), a domain-independent, pedagogically grounded, and stakeholder-aligned framework with measurable indicators and a practical toolkit for guiding the design, development, and evaluation of generative AI systems in educational contexts. Built on an extensive literature review and synthesis, the ten-component assessment framework and toolkit checklist provide a foundation for scalable, value-aligned AI evaluation in education. TEACH-AI rethinks "evaluation" through sociotechnical, educational, theoretical, and applied lenses, engaging designers, developers, researchers, and policymakers across AI and education. Our work invites the community to reconsider what constitutes "effective" AI in education and to design model evaluation approaches that promote co-creation, inclusivity, and long-term human, social, and educational impact.
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (2 more...)
- Education > Educational Setting (1.00)
- Education > Educational Technology > Educational Software > Computer Based Training (0.94)
- Education > Curriculum > Subject-Specific Education (0.94)
- Health & Medicine (0.69)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.57)
Leveraging LLMs for Design Ideation: An AI Tool to Assist Creativity
Kokate, Rutvik, Kompella, Pranati, Onkar, Prasad
The creative potential of computers has intrigued researchers for decades. Since the emergence of Generative AI (Gen AI), computer creativity has found many new dimensions and applications. As Gen AI permeates mainstream discourse and usage, researchers are delving into how it can improve and complement what humans do. Creative potential is a highly relevant notion in design practice and research, especially in the initial stages of ideation and conceptualisation. There is scope to improve creative potential in these stages, especially using machine intelligence. We propose a structured ideation session involving inspirational stimuli and utilise Gen AI to deliver this structure to designers through ALIA: Analogical LLM Ideation Agent, a tool for small-group ideation scenarios. The tool enables speech-based interaction with a Large Language Model (LLM) for inference generation. Inspiration is drawn from the synectic ideation method and the dialectics philosophy to design the optimal stimuli in group ideation. The tool is tested in design ideation sessions to compare the output of the AI-assisted sessions to that of traditional ideation sessions. Preliminary findings show that participants rated their ideas more highly when assisted by ALIA and responded favourably to speech-based interactions.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > United States > Indiana > Madison County > Anderson (0.04)
- (3 more...)
- Research Report > New Finding (0.68)
- Research Report > Promising Solution (0.46)
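One turn of the analogical-stimulus loop the abstract describes (speech in, inspirational analogy out) might look like the sketch below. The function names and the analogy table are illustrative assumptions; the actual tool consults an LLM, and speech recognition and text-to-speech are out of scope here.

```python
# Direct analogies in the synectic style: map a design problem to an
# analogous natural system that can seed group ideation.
ANALOGY_TABLE = {
    "ventilation": "how termite mounds regulate temperature",
    "adhesion": "how gecko feet grip smooth surfaces",
}

def analogical_stimulus(problem_keyword: str) -> str:
    """Return an analogy prompt for the group; a real tool would ask an
    LLM instead of using a lookup table."""
    analogy = ANALOGY_TABLE.get(problem_keyword, "a comparable natural system")
    return f"Consider {analogy}. How might its mechanism inspire your design?"

def ideation_turn(transcribed_speech: str) -> str:
    """One turn of the session: keyword-spot the (already transcribed)
    speech and respond with a stimulus, to be played back via TTS."""
    for keyword in ANALOGY_TABLE:
        if keyword in transcribed_speech.lower():
            return analogical_stimulus(keyword)
    return analogical_stimulus("")
```

In the paper's setup the stimulus arrives mid-conversation via speech, which keeps designers at the whiteboard rather than at a keyboard.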
Universe of Thoughts: Enabling Creative Reasoning with Large Language Models
Suzuki, Yuto, Banaei-Kashani, Farnoush
Reasoning based on Large Language Models (LLMs) has garnered increasing attention due to the outstanding performance of these models in mathematical and complex logical tasks. Beginning with the Chain-of-Thought (CoT) prompting technique, numerous reasoning methods have emerged that decompose problems into smaller, sequential steps (or thoughts). However, existing reasoning models focus on conventional problem-solving and do not necessarily generate creative solutions through "creative reasoning". In domains where the solution space is expansive and conventional solutions are suboptimal, such as drug discovery or business strategization, creative reasoning to discover innovative solutions is crucial. To address this gap, we first introduce a computational framework for creative reasoning inspired by established cognitive science principles. With this framework, we propose three core creative reasoning paradigms, namely combinational, exploratory, and transformative reasoning, where each offers specific directions for systematic exploration of the universe of thoughts to generate creative solutions. Next, to materialize this framework using LLMs, we introduce the Universe of Thoughts (or UoT, for short), a novel set of methods to implement the aforementioned three creative processes. Finally, we introduce three novel tasks that necessitate creative problem-solving, along with an evaluation benchmark to assess creativity from three orthogonal perspectives: feasibility as a constraint, and utility and novelty as metrics. With a comparative analysis against state-of-the-art (SOTA) reasoning techniques as well as representative commercial models with reasoning capability, we show that UoT demonstrates superior performance in creative reasoning.
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.94)