

A 1,000-pound geode, mosasaur skeleton, and more head to auction

Popular Science

Fossils, purple geodes, and more are hitting the auction block during Heritage's Nature & Science Auction, with bidding beginning on December 2. Among the lots is an amethyst geode that weighs 1,933 pounds and measures 44 inches long. "This is an exceptional event, with every lot from the same consignor," Craig Kissick, Heritage's Vice President of Nature & Science, said in a statement. "Collections like this, with this level of both quality and variety, rarely reach the collecting market. From fossils to meteorites, and minerals to lapidary arts, this auction has treasures that will appeal to collectors of all kinds."


Oitijjo-3D: Generative AI Framework for Rapid 3D Heritage Reconstruction from Street View Imagery

Ope, Momen Khandoker, Islam, Akif, Ameen, Mohd Ruhul, Miah, Abu Saleh Musa, Islam, Md Rashedul, Shin, Jungpil

arXiv.org Artificial Intelligence

Cultural heritage restoration in Bangladesh faces a dual challenge of limited resources and scarce technical expertise. Traditional 3D digitization methods, such as photogrammetry or LiDAR scanning, require expensive hardware, expert operators, and extensive on-site access, which are often infeasible in developing contexts. As a result, many of Bangladesh's architectural treasures, from the Paharpur Buddhist Monastery to Ahsan Manzil, remain vulnerable to decay and inaccessible in digital form. This paper introduces Oitijjo-3D, a cost-free generative AI framework that democratizes 3D cultural preservation. By using publicly available Google Street View imagery, Oitijjo-3D reconstructs faithful 3D models of heritage structures through a two-stage pipeline - multimodal visual reasoning with Gemini 2.5 Flash Image for structure-texture synthesis, and neural image-to-3D generation through Hexagen for geometry recovery. The system produces photorealistic, metrically coherent reconstructions in seconds, achieving significant speedups compared to conventional Structure-from-Motion pipelines, without requiring any specialized hardware or expert supervision. Experiments on landmarks such as Ahsan Manzil, Choto Sona Mosque, and Paharpur demonstrate that Oitijjo-3D preserves both visual and structural fidelity while drastically lowering economic and technical barriers. By turning open imagery into digital heritage, this work reframes preservation as a community-driven, AI-assisted act of cultural continuity for resource-limited nations.
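The two-stage pipeline described above can be sketched as a simple orchestration skeleton. This is a minimal illustration only: the `synthesize_views` and `image_to_3d` functions are hypothetical stubs standing in for the paper's actual calls to Gemini 2.5 Flash Image and Hexagen, which are external services not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class ReconstructionResult:
    views: list   # stage-1 synthesized structure-texture views
    mesh: str     # stage-2 recovered geometry (placeholder identifier)

def synthesize_views(street_view_images):
    # Stage 1 (hypothetical stub): a multimodal model such as Gemini 2.5
    # Flash Image would clean occlusions and synthesize consistent
    # structure-texture views from raw Street View frames.
    return [f"clean_{img}" for img in street_view_images]

def image_to_3d(views):
    # Stage 2 (hypothetical stub): a neural image-to-3D generator such as
    # Hexagen would recover geometry from the synthesized views.
    return f"mesh_from_{len(views)}_views"

def reconstruct(street_view_images):
    # Orchestrate the two stages: views first, then geometry recovery.
    views = synthesize_views(street_view_images)
    return ReconstructionResult(views=views, mesh=image_to_3d(views))

result = reconstruct(["ahsan_manzil_01.jpg", "ahsan_manzil_02.jpg"])
```

The point of the skeleton is the data flow: publicly available imagery goes in, an intermediate set of cleaned views is produced, and a single mesh comes out, with no specialized hardware in the loop.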


Generative AI in Heritage Practice: Improving the Accessibility of Heritage Guidance

Witte, Jessica, Lee, Edmund, Brausem, Lisa, Shillabeer, Verity, Bonacchi, Chiara

arXiv.org Artificial Intelligence

This paper discusses the potential for integrating Generative Artificial Intelligence (GenAI) into professional heritage practice with the aim of enhancing the accessibility of public-facing guidance documents. We developed HAZEL, a GenAI chatbot fine-tuned to assist with revising written guidance relating to heritage conservation and interpretation. Using quantitative assessments, we compare HAZEL's performance to that of ChatGPT (GPT-4) in a series of tasks related to the guidance writing process. The results of this comparison indicate a slightly better performance of HAZEL over ChatGPT, suggesting that the GenAI chatbot is more effective once the underlying large language model (LLM) has been fine-tuned. However, we also note significant limitations, particularly in areas requiring cultural sensitivity and more advanced technical expertise. These findings suggest that, while GenAI cannot replace human heritage professionals in technical authoring tasks, its potential to automate and expedite certain aspects of guidance writing could offer valuable benefits to heritage organisations, especially in resource-constrained contexts.


Quantum est in Libris: Navigating Archives with GenAI, Uncovering Tension Between Preservation and Innovation

Sola, Mar Canet, Guljajeva, Varvara

arXiv.org Artificial Intelligence

"Quantum est in libris" explores the intersection of the archaic and the modern. On one side, there are manuscript materials from the Estonian National Museum's (ERM) more than century-old archive describing the life experiences of Estonian people; on the other side, there is technology that transforms these materials into a dynamic and interactive experience. Connecting technology and cultural heritage is the visitor, who turns texts into inputs for a screen sculpture. Historical narratives are visually brought to life through the contemporary technological language. Because the video AI models we employed, Runway Gen-3 and Gen-4, have not previously interacted with Estonian heritage, we can observe how machines today "read the world" and create future heritage. "Quantum est in libris" introduces an exciting yet unsettling new dimension to the concept of cultural heritage: in a world where data are fluid and interpretations unstable, heritage status becomes fragile. In the digital environment, heritage issues are no longer just about preservation and transmission, but also about representation of the media, machine creativity, and interpretive error. Who or what shapes memory processes and memory spaces, and how?


Modeling Professionalism in Expert Questioning through Linguistic Differentiation

D'Agostino, Giulia, Chen, Chung-Chi

arXiv.org Artificial Intelligence

Professionalism is a crucial yet underexplored dimension of expert communication, particularly in high-stakes domains like finance. This paper investigates how linguistic features can be leveraged to model and evaluate professionalism in expert questioning. We introduce a novel annotation framework to quantify structural and pragmatic elements in financial analyst questions, such as discourse regulators, prefaces, and request types. Using both human-authored and large language model (LLM)-generated questions, we construct two datasets: one annotated for perceived professionalism and one labeled by question origin. We show that the same linguistic features correlate strongly with both human judgments and authorship origin, suggesting a shared stylistic foundation. Furthermore, a classifier trained solely on these interpretable features outperforms gemini-2.0 and SVM baselines in distinguishing expert-authored questions. Our findings demonstrate that professionalism is a learnable, domain-general construct that can be captured through linguistically grounded modeling.
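The feature types the abstract names (discourse regulators, prefaces, request types) can be illustrated with a toy extractor. The marker lexicons and regex patterns below are invented placeholders, not the paper's annotation framework, which defines its own inventories.

```python
import re

# Hypothetical marker inventories; the paper's annotation framework
# specifies its own lists of regulators, prefaces, and request types.
DISCOURSE_REGULATORS = {"okay", "so", "now", "right"}
PREFACE_PATTERNS = [r"\bthanks for\b", r"\bcongratulations\b",
                    r"\bgood (morning|afternoon)\b"]
REQUEST_PATTERNS = {
    "can_could": r"\b(can|could) you\b",
    "wh_question": r"^\s*(what|how|why|when|where|which)\b",
}

def extract_features(question: str) -> dict:
    """Map an analyst question to interpretable structural features."""
    text = question.lower()
    tokens = re.findall(r"[a-z']+", text)
    feats = {
        "n_regulators": sum(t in DISCOURSE_REGULATORS for t in tokens),
        "has_preface": int(any(re.search(p, text) for p in PREFACE_PATTERNS)),
    }
    for name, pat in REQUEST_PATTERNS.items():
        feats[f"req_{name}"] = int(bool(re.search(pat, text)))
    return feats

q = ("Good morning, thanks for the color. So, could you walk us "
     "through margin guidance?")
feats = extract_features(q)
```

A vector of such counts per question is the kind of input an interpretable classifier could be trained on to separate expert-authored from LLM-generated questions.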


AI-based Decision Support System for Heritage Aircraft Corrosion Prevention

Kuchař, Michal, Fišer, Jaromír, Oswald, Cyril, Vyhlídal, Tomáš

arXiv.org Artificial Intelligence

The paper presents a decision support system for the long-term preservation of aeronautical heritage exhibited or stored in sheltered sites. Aeronautical heritage is characterized by the diverse materials of which it is constituted: heritage aircraft are made of ancient aluminum alloys, (ply)wood, and particularly fabrics. The decision support system (DSS), designed starting from a conceptual model, is knowledge-based, drawing on the degradation and corrosion mechanisms of the prevailing materials of aeronautical heritage. For historical aircraft wooden parts, this knowledge base is populated with the damage function models developed within former European projects, while model-based corrosion prediction is implemented within the new DSS for ancient aluminum alloys. The novelty of this DSS lies in its support for multi-material heritage protection and its tailoring to the peculiarities of aircraft exhibition and storage hangars and the needs of aviation museums. The novel DSS is tested on WWII aircraft heritage exhibited in the Aviation Museum Kbely, Military History Institute Prague, Czech Republic.


A Tale of Two Identities: An Ethical Audit of Human and AI-Crafted Personas

Venkit, Pranav Narayanan, Li, Jiayi, Zhou, Yingfan, Rajtmajer, Sarah, Wilson, Shomir

arXiv.org Artificial Intelligence

As LLMs (large language models) are increasingly used to generate synthetic personas--particularly in data-limited domains such as health, privacy, and HCI--it becomes necessary to understand how these narratives represent identity, especially that of minority communities. In this paper, we audit synthetic personas generated by 3 LLMs (GPT-4o, Gemini 1.5 Pro, Deepseek v2.5) through the lens of representational harm, focusing specifically on racial identity. Using a mixed-methods approach combining close reading, lexical analysis, and a parameterized creativity framework, we compare 1,512 LLM-generated personas to human-authored responses. Our findings reveal that LLMs disproportionately foreground racial markers, overproduce culturally coded language, and construct personas that are syntactically elaborate yet narratively reductive. These patterns result in a range of sociotechnical harms--including stereotyping, exoticism, erasure, and benevolent bias--that are often obfuscated by superficially positive narrations. We formalize this phenomenon as algorithmic othering, where minoritized identities are rendered hypervisible but less authentic. Based on these findings, we offer design recommendations for narrative-aware evaluation metrics and community-centered validation protocols for synthetic identity generation.


Baby names associated with intelligence are dying out, study reveals - so, is yours at risk of extinction?

Daily Mail - Science & tech

It's one of the most difficult decisions a new parent can make – what shall we call our baby? Now, a huge analysis has revealed that names associated with intelligence are dying out, while those linked to beauty, elegance or strength are on the up. The study, carried out by The Economist, scrutinised the names of nearly 400 million infants born in Britain and the US over the last 143 years. Researchers used a large language model – the type of AI that powers the likes of ChatGPT – for their analysis. They fed it with an enormous amount of text taken from the internet and asked it to identify the five most common terms linked with each name.


Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays

Lee, Jinsook, Alvero, AJ, Joachims, Thorsten, Kizilcec, René

arXiv.org Artificial Intelligence

People are increasingly using technologies equipped with large language models (LLMs) to write texts for formal communication, which raises two important questions at the intersection of technology and society: Who do LLMs write like (model alignment); and can LLMs be prompted to change who they write like (model steerability). We investigate these questions in the high-stakes context of undergraduate admissions at a selective university by comparing lexical and sentence variation between essays written by 30,000 applicants to two types of LLM-generated essays: one prompted with only the essay question used by the human applicants; and another with additional demographic information about each applicant. We consistently find that both types of LLM-generated essays are linguistically distinct from human-authored essays, regardless of the specific model and analytical approach. Further, prompting a specific sociodemographic identity is remarkably ineffective in aligning the model with the linguistic patterns observed in human writing from this identity group. This holds along the key dimensions of sex, race, first-generation status, and geographic location. The demographically prompted and unprompted synthetic texts were also more similar to each other than to the human text, meaning that prompting did not alleviate homogenization. These issues of model alignment and steerability in current LLMs raise concerns about the use of LLMs in high-stakes contexts.
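"Lexical and sentence variation" can be made concrete with a crude profile of the kind sometimes used in such comparisons: type-token ratio plus sentence-length mean and spread. This is an illustrative toy, not the paper's actual metrics, and the two sample texts are invented for demonstration.

```python
import re
import statistics

def lexical_profile(text: str) -> dict:
    # Split into sentences on terminal punctuation, then tokenize words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        # Higher ratio = more varied vocabulary.
        "type_token_ratio": len(set(tokens)) / len(tokens),
        "mean_sentence_len": statistics.mean(lengths),
        # Population SD of sentence lengths; 0 means uniform sentences.
        "sentence_len_sd": statistics.pstdev(lengths),
    }

# Invented examples: varied human prose vs. templated synthetic prose.
human = ("I fix bikes after school. Grease never quite leaves my hands. "
         "My mom hates it.")
llm = ("Through this experience, I cultivated resilience. "
       "Through this journey, I cultivated empathy.")

h, m = lexical_profile(human), lexical_profile(llm)
```

On these toy inputs the templated text scores lower on vocabulary variety and shows no sentence-length variation, which is the homogenization pattern the study measures at scale.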


Adaptive Graph of Thoughts: Test-Time Adaptive Reasoning Unifying Chain, Tree, and Graph Structures

Pandey, Tushar, Ghukasyan, Ara, Goktas, Oktay, Radha, Santosh Kumar

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, yet their performance is highly dependent on the prompting strategy and model scale. While reinforcement learning and fine-tuning have been deployed to boost reasoning, these approaches incur substantial computational and data overhead. In this work, we introduce Adaptive Graph of Thoughts (AGoT), a dynamic, graph-based inference framework that enhances LLM reasoning solely at test time. Rather than relying on fixed-step methods like Chain of Thought (CoT) or Tree of Thoughts (ToT), AGoT recursively decomposes complex queries into structured subproblems, forming a dynamic directed acyclic graph (DAG) of interdependent reasoning steps. By selectively expanding only those subproblems that require further analysis, AGoT unifies the strengths of chain, tree, and graph paradigms into a cohesive framework that allocates computation where it is most needed. We validate our approach on diverse benchmarks spanning multi-hop retrieval, scientific reasoning, and mathematical problem-solving, achieving up to 46.2% improvement on scientific reasoning tasks (GPQA) - comparable to gains achieved through computationally intensive reinforcement learning approaches and outperforming state-of-the-art iterative approaches. These results suggest that dynamic decomposition and structured recursion offer a scalable, cost-effective alternative to post-training modifications, paving the way for more robust, general-purpose reasoning in LLMs.