metaphor
AI's Memorization Crisis
Large language models don't "learn"--they copy. And that could change everything for the tech industry. On Tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models--OpenAI's GPT, Anthropic's Claude, Google's Gemini, and xAI's Grok--have stored large portions of some of the books they were trained on and can reproduce long excerpts from them. When prompted strategically by the researchers, Claude delivered the near-complete text of several books, along with thousands of words from others.
- North America > United States > California (0.04)
- Europe > Germany (0.04)
- Information Technology (0.67)
- Law > Intellectual Property & Technology Law (0.31)
Can a new book crack one of neuroscience's hardest problems? Not quite
The ideas presented in George Lakoff and Srini Narayanan's The Neural Mind are fascinating, but the writing is far less compelling. This is a book review in two parts. The first is about the ideas presented in The Neural Mind: How brains think, which are fascinating. The second is about the actual experience of reading it. The book tackles one of the biggest questions in neuroscience: how do neurons carry out all the different kinds of human thought, from planning motor actions to composing sentences and musing about philosophy? The authors have very different perspectives.
- Europe > Switzerland > Zürich > Zürich (0.15)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- North America > United States > California > Alameda County > Berkeley (0.05)
- Europe > United Kingdom > England > Devon (0.05)
Cozy up (safely) to an e-scooter's lithium battery yule log
The United States Consumer Product Safety Commission (CPSC) is well known for getting its point across on social media. A seven-minute montage of mannequins succumbing to 4th of July firework injuries may be an unconventional way to warn about the dangers of recreational explosives--but try forgetting those images when lighting your next bottle rocket. In similar pyrotechnic fashion, the CPSC is warning everyone to take extra care during the holidays when it comes to all kinds of combustible, seasonally appropriate objects. On December 22, the commission illustrated how some gifts are far more flammable than others with its 30-minute E-scooter Lithium-Ion Battery Yule Log video.
- North America > United States (0.57)
- Oceania > New Zealand (0.05)
- Electrical Industrial Apparatus (0.88)
- Consumer Products & Services (0.73)
- Energy > Energy Storage (0.73)
Better images of AI on book covers
'Learning with AI' is an open-source book from the University of Leeds. We spoke with Chrissi Nerantzi, part of the project team, about their choice to use Ariyana Ahmad's illustration 'AI is Everywhere' for the cover of the book. For the team, the choice of cover was about more than just visual aesthetics. Collages can capture multiple perspectives, textures, and approaches, much like the student voices incorporated throughout the book. Ahmad's illustration, while not a collage, achieves a similar effect.
Review of "Exploring metaphors of AI: visualisations, narratives and perception"
From 10th to 12th September 2025, Barcelona hosted an academic gathering at the Universitat Oberta de Catalunya: the first Hype Studies Conference, titled "(Don't) Believe the Hype!?" Organised by a transnational collective of scholars and practitioners, the conference drew together researchers, activists, artists, journalists, and technology professionals to examine hype as a significant force shaping contemporary society. Hype Studies is an emerging academic field that analyses how and why excessive expectations form around technologies, ideas, or phenomena, and what effects those expectations have on society, culture, economics, and policy. As the playful brackets around "Don't" in the conference title suggest - both a warning and an invitation to question that warning - the aim of the conference wasn't simply to reject hype, but to understand it: to treat it as a phenomenon with real power and consequences that needs to be examined and questioned. The purpose was to build collective knowledge about hype, develop more concrete theories, share empirical findings, and create an interdisciplinary community that advances the field's scholarship.
- North America > United States > Florida (0.04)
- Europe > Italy (0.04)
- Asia > Indonesia > Bali (0.04)
- (4 more...)
Dutch Metaphor Extraction from Cancer Patients' Interviews and Forum Data using LLMs and Human in the Loop
Han, Lifeng, Lindevelt, David, Puts, Sander, van Mulligen, Erik, Verberne, Suzan
Metaphors and metaphorical language (MLs) play an important role in healthcare communication between clinicians, patients, and patients' family members. In this work, we focus on Dutch language data from cancer patients. We extract metaphors used by patients using two data sources: (1) cancer patient storytelling interview data and (2) online forum data, including patients' posts, comments, and questions to professionals. We investigate how current state-of-the-art large language models (LLMs) perform on this task by exploring different prompting strategies such as chain of thought reasoning, few-shot learning, and self-prompting. With a human-in-the-loop setup, we verify the extracted metaphors and compile the outputs into a corpus named HealthQuote.NL. We believe the extracted metaphors can support better patient care, for example shared decision making, improved communication between patients and clinicians, and enhanced patient health literacy. They can also inform the design of personalized care pathways. We share prompts and related resources at https://github.com/aaronlifenghan/HealthQuote.NL
- Europe > Netherlands > South Holland > Leiden (0.04)
- Europe > United Kingdom (0.04)
- Europe > Netherlands > Limburg > Maastricht (0.04)
- (3 more...)
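The prompting setup the abstract describes can be sketched in a few lines. This is an illustrative few-shot prompt builder, not the authors' actual prompts (those are in their repository); the example sentences and the names `FEW_SHOT_EXAMPLES` and `build_prompt` are invented for the sketch.

```python
# Hypothetical few-shot prompt for Dutch metaphor extraction; the
# demonstration pairs below are invented, not from HealthQuote.NL.
FEW_SHOT_EXAMPLES = [
    # (patient utterance, metaphors an annotator extracted from it)
    ("De chemo was een gevecht dat ik moest winnen.",
     ["een gevecht dat ik moest winnen"]),
    ("Ik zat in een achtbaan van emoties.",
     ["een achtbaan van emoties"]),
]

def build_prompt(utterance: str) -> str:
    """Assemble a few-shot metaphor-extraction prompt for an LLM."""
    lines = [
        "Extract the metaphorical expressions from the Dutch sentence.",
        "Answer with one expression per line, or 'NONE'.",
        "",
    ]
    for text, metaphors in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {text}")
        lines.append("Metaphors: " + "; ".join(metaphors))
        lines.append("")
    lines.append(f"Sentence: {utterance}")
    lines.append("Metaphors:")
    return "\n".join(lines)

prompt = build_prompt("De ziekte is een reis zonder kaart.")
print(prompt.count("Sentence:"))  # 3: two demonstrations plus the query
```

In the human-in-the-loop setup described, the model's completions to such prompts would then be verified by annotators before entering the corpus.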
Adaptive Originality Filtering: Rejection Based Prompting and RiddleScore for Culturally Grounded Multilingual Riddle Generation
Le, Duy, Ziti, Kent, Girard-Sun, Evan, Bouhaya, Bakr, O'Brien, Sean, Sharma, Vasu, Zhu, Kevin
Language models are increasingly tested on multilingual creativity, demanding culturally grounded, abstract generations. Standard prompting methods often produce repetitive or shallow outputs. We introduce Adaptive Originality Filtering (AOF), a prompting strategy that enforces novelty and cultural fidelity via semantic rejection. To assess quality, we propose RiddleScore, a metric combining novelty, diversity, fluency, and answer alignment. AOF improves Distinct-2 (0.915 in Japanese), reduces Self-BLEU (0.177), and raises RiddleScore (up to +57.1% in Arabic). Human evaluations confirm fluency, creativity, and cultural fit gains. However, improvements vary: Arabic shows greater RiddleScore gains than Distinct-2; Japanese sees similar changes. Though focused on riddles, our method may apply to broader creative tasks. Overall, semantic filtering with composite evaluation offers a lightweight path to culturally rich generation without fine-tuning.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Education (0.46)
- Media > News (0.46)
- Information Technology > Security & Privacy (0.45)
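The two mechanisms the abstract leans on, a rejection filter against near-duplicates and the Distinct-2 diversity metric, can be sketched concretely. Note the paper rejects on *semantic* similarity via embeddings; as a self-contained stand-in this sketch uses lexical Jaccard overlap, and `accept` and `max_overlap` are illustrative names, not the paper's API.

```python
# Sketch of AOF-style rejection plus the standard Distinct-2 metric.
def distinct_2(texts):
    """Unique bigrams / total bigrams across a batch (higher = more diverse)."""
    bigrams = []
    for t in texts:
        toks = t.split()
        bigrams += list(zip(toks, toks[1:]))
    return len(set(bigrams)) / len(bigrams) if bigrams else 0.0

def accept(candidate, kept, max_overlap=0.5):
    """Reject a candidate riddle too similar to any already-kept one
    (lexical Jaccard here; the paper uses semantic similarity)."""
    cand = set(candidate.split())
    for prev in kept:
        prev_set = set(prev.split())
        if len(cand & prev_set) / len(cand | prev_set) > max_overlap:
            return False
    return True

kept = []
for riddle in ["what has keys but no locks",
               "what has keys but no locks at all",   # near-duplicate
               "what runs but never walks"]:
    if accept(riddle, kept):
        kept.append(riddle)

print(len(kept))  # 2: the near-duplicate is rejected
```

Filtering before scoring is what raises Distinct-2 and lowers Self-BLEU in the reported numbers: repetitive candidates never reach the output set.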
Unveiling LLMs' Metaphorical Understanding: Exploring Conceptual Irrelevance, Context Leveraging and Syntactic Influence
Ye, Fengying, Wang, Shanshan, Chao, Lidia S., Wong, Derek F.
Metaphor analysis is a complex linguistic phenomenon shaped by context and external factors. While Large Language Models (LLMs) demonstrate advanced capabilities in knowledge integration, contextual reasoning, and creative generation, their mechanisms for metaphor comprehension remain insufficiently explored. This study examines LLMs' metaphor-processing abilities from three perspectives: (1) Concept Mapping: using embedding space projections to evaluate how LLMs map concepts in target domains (e.g., misinterpreting "fall in love" as "drop down from love"); (2) Metaphor-Literal Repository: analyzing metaphorical words and their literal counterparts to identify inherent metaphorical knowledge; and (3) Syntactic Sensitivity: assessing how metaphorical syntactic structures influence LLMs' performance. Our findings reveal that LLMs generate 15%-25% conceptually irrelevant interpretations, depend on metaphorical indicators in training data rather than contextual cues, and are more sensitive to syntactic irregularities than to structural comprehension. These insights underline the limitations of LLMs in metaphor analysis and call for more robust computational approaches.
- Asia > Thailand > Bangkok > Bangkok (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (12 more...)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.34)
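The embedding-projection probe behind "Concept Mapping" amounts to asking which candidate interpretation a phrase's embedding sits closest to. A minimal sketch, using toy 3-d vectors as stand-ins for real LLM embeddings (the vectors and labels here are invented for illustration):

```python
# Toy version of the concept-mapping probe: cosine similarity between a
# metaphorical phrase and two candidate interpretations.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy embeddings: "fall in love" should land near the emotional reading,
# not the literal-motion reading the abstract's example warns about.
phrase    = (0.9, 0.1, 0.2)   # "fall in love"
emotional = (0.8, 0.2, 0.1)   # "begin to feel love"
literal   = (0.1, 0.9, 0.3)   # "drop down from love"

best = max([("emotional", emotional), ("literal", literal)],
           key=lambda kv: cosine(phrase, kv[1]))
print(best[0])  # emotional
```

With real model embeddings, a "conceptually irrelevant interpretation" in the paper's sense would show up as the literal candidate winning this comparison.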
Prototyping Digital Social Spaces through Metaphor-Driven Design: Translating Spatial Concepts into an Interactive Social Simulation
Hong, Yoojin, Di Paola, Martina, Padmakumar, Braahmi, Lee, Hwi Joon, Shafiq, Mahnoor, Seering, Joseph
Social media platforms are central to communication, yet their designs remain narrowly focused on engagement and scale. While researchers have proposed alternative visions for online spaces, these ideas are difficult to prototype within platform constraints. In this paper, we introduce a metaphor-driven system to help users imagine and explore new social media environments. The system translates users' metaphors into structured sets of platform features and generates interactive simulations populated with LLM-driven agents. To evaluate this approach, we conducted a study where participants created and interacted with simulated social media spaces. Our findings show that metaphors allow users to express distinct social expectations, and that perceived authenticity of the simulation depended on how well it captured dynamics like intimacy, participation, and temporal engagement. We conclude by discussing how metaphor-driven simulation can be a powerful design tool for prototyping alternative social architectures and expanding the design space for future social platforms.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Asia > South Korea > Daejeon > Daejeon (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)
- Leisure & Entertainment (1.00)
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (0.67)
- (2 more...)
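The metaphor-to-features translation step can be pictured as prompting a model to fill a fixed schema. The schema fields, the prompt wording, and the canned response below are invented for illustration and are not the paper's actual implementation.

```python
# Hypothetical sketch: map a user's spatial metaphor onto a fixed set of
# platform-feature slots via a structured LLM prompt.
import json

FEATURE_SCHEMA = ["visibility", "persistence", "membership", "pacing"]

def prompt_for(metaphor: str) -> str:
    return (f"Translate the social-space metaphor '{metaphor}' into JSON "
            f"with exactly these keys: {', '.join(FEATURE_SCHEMA)}.")

# Stand-in for a model call: a canned response for "a campfire circle".
canned = ('{"visibility": "small group", "persistence": "ephemeral", '
          '"membership": "invite-only", "pacing": "slow, turn-taking"}')

features = json.loads(canned)
assert set(features) == set(FEATURE_SCHEMA)
print(features["persistence"])  # ephemeral
```

The parsed feature set would then parameterize the simulation, e.g. how long posts persist and how LLM-driven agents take turns.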
Metaphor identification using large language models: A comparison of RAG, prompt engineering, and fine-tuning
Fuoli, Matteo, Huang, Weihang, Littlemore, Jeannette, Turner, Sarah, Wilding, Ellen
Metaphor is a pervasive feature of discourse and a powerful lens for examining cognition, emotion, and ideology. Large-scale analysis, however, has been constrained by the need for manual annotation due to the context-sensitive nature of metaphor. This study investigates the potential of large language models (LLMs) to automate metaphor identification in full texts. We compare three methods: (i) retrieval-augmented generation (RAG), where the model is provided with a codebook and instructed to annotate texts based on its rules and examples; (ii) prompt engineering, where we design task-specific verbal instructions; and (iii) fine-tuning, where the model is trained on hand-coded texts to optimize performance. Within prompt engineering, we test zero-shot, few-shot, and chain-of-thought strategies. Our results show that state-of-the-art closed-source LLMs can achieve high accuracy, with fine-tuning yielding a median F1 score of 0.79. A comparison of human and LLM outputs reveals that most discrepancies are systematic, reflecting well-known grey areas and conceptual challenges in metaphor theory. We propose that LLMs can be used to at least partly automate metaphor identification and can serve as a testbed for developing and refining metaphor identification protocols and the theory that underpins them.
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (3 more...)
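The evaluation behind the reported median F1 of 0.79 reduces to scoring the model's binary metaphor labels against hand-coded ones. A minimal sketch; the token labels and example sentence are invented for illustration:

```python
# F1 over per-token binary metaphor labels (1 = metaphorical).
def f1(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# "the economy is on life support": the human codes the last three tokens
# as metaphorical; the model misses one and adds a false positive.
gold = [0, 0, 0, 1, 1, 1]
pred = [0, 1, 0, 1, 1, 0]
print(round(f1(gold, pred), 2))  # 0.67
```

Systematic disagreements of exactly this kind, rather than random noise, are what the authors report when comparing human and LLM annotations.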