Librarians can't keep up with bad AI

Popular Science

From false sources to hallucinations, it's become a major problem. Generative artificial intelligence continues to have a problem with hallucinations. Although many responses to user queries are largely accurate, programs like ChatGPT, Google Gemini, and Microsoft Copilot remain prone to offering made-up information and facts. As bad as that is on its own, the issue is further complicated by these programs' tendency to produce seemingly reputable, yet wholly imaginary, sources. And as annoying as that is for millions of users, it's becoming a major issue for the people trusted to provide reliable, real information: librarians.


Fairness Evaluation of Large Language Models in Academic Library Reference Services

Wang, Haining, Clark, Jason, Yan, Yueru, Bradley, Star, Chen, Ruiyang, Zhang, Yiqiong, Fu, Hengyi, Tian, Zuoyu

arXiv.org Artificial Intelligence

As libraries explore large language models (LLMs) for use in virtual reference services, a key question arises: Can LLMs serve all users equitably, regardless of demographics or social status? While they offer great potential for scalable support, LLMs may also reproduce societal biases embedded in their training data, risking the integrity of libraries' commitment to equitable service. To address this concern, we evaluate whether LLMs differentiate responses across user identities by prompting six state-of-the-art LLMs to assist patrons differing in sex, race/ethnicity, and institutional role. We find no evidence of differentiation by race or ethnicity, and only minor evidence of stereotypical bias against women in one model. LLMs demonstrate nuanced accommodation of institutional roles through the use of linguistic choices related to formality, politeness, and domain-specific vocabularies, reflecting professional norms rather than discriminatory treatment. These findings suggest that current LLMs show a promising degree of readiness to support equitable and contextually appropriate communication in academic library reference services.
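The evaluation setup described above can be sketched as a small fairness probe: cross patron identity attributes into otherwise identical reference questions, then compare surface features of the responses. The attribute values, question, and formality markers below are illustrative assumptions, not the paper's actual prompt templates or metrics.

```python
from itertools import product

# Hypothetical identity attributes and reference question (not from the paper).
SEXES = ["female", "male"]
ROLES = ["undergraduate student", "faculty member"]
QUESTION = "How do I locate primary sources on 19th-century trade policy?"

def build_prompts(question):
    """Cross identity attributes into patron personas, holding the query fixed."""
    prompts = []
    for sex, role in product(SEXES, ROLES):
        persona = f"A {sex} {role} asks the library chat service:"
        prompts.append(f"{persona} {question}")
    return prompts

def formality_score(response):
    """Crude proxy metric: single-word politeness markers per 100 words."""
    markers = ("please", "kindly", "certainly")
    words = response.lower().split()
    hits = sum(words.count(m) for m in markers)
    return 100 * hits / max(len(words), 1)
```

Sending each prompt to the same model and comparing `formality_score` (or response length, vocabulary, etc.) across personas is the basic differentiation test; a real study would use validated linguistic measures rather than this keyword count.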


AI Literacy in UAE Libraries: Assessing Competencies, Training Needs, and Ethical Considerations for the Digital Age

Khan, Zafar Imam

arXiv.org Artificial Intelligence

This is the accepted manuscript version. The final published version will appear in College & Research Libraries, November 2026. Zafar Imam Khan, Learning Resources Manager, Hamdan Bin Mohammed Smart University, Dubai, United Arab Emirates. Email: zafarimamkhan@gmail.com, https://orcid.org/0000-0003-2081-0951

Abstract: The study explores the current state of artificial intelligence (AI) literacy among library professionals, employing a quantitative approach consisting of 92 surveys of LIS professionals in the United Arab Emirates (UAE). Findings revealed strong cognitive competencies, while gaps were observed in behavioral and normative competencies, especially related to AI biases, AI-powered learning, and ethical considerations. A disconnect was also observed between the perceived importance of AI skills and the effectiveness of current training programs.

Introduction: Generative AI has created massive disruption in all sectors, such as manufacturing, services, agriculture, medicine, and education, and has transformed a range of operations and services. Libraries are transforming and gearing up to harness the power of AI, which can enhance the efficiency, accessibility, and personalization of services, thereby reshaping the traditional library landscape. This transformation has been observed in several traditional library services, as AI is automating routine tasks such as cataloguing and classification of collections and enhancing search functionality and information retrieval, creating a more accurate and organized library system while librarians gain more time to focus on intellectually stimulating activities (Preethi, 2024).
There is a race to integrate AI into library services at a global level, and this has presented both opportunities and challenges in terms of AI literacy among library professionals. AI literacy involves an understanding of AI tools, their applications, and the ethical considerations surrounding their use.


Trump admin fires top US copyright official days after terminating Librarian of Congress

FOX News

An AI art lecturer said he believes the U.S. government would encounter difficulty if it attempted to establish a watermark system for AI-generated content. Trump fired Librarian of Congress Carla Hayden, who was the first woman and first African American to be Librarian of Congress, on Thursday. The termination was part of the administration's ongoing purge of government officials who are perceived to be opposed to Trump and his agenda. The White House did not immediately respond to Fox News Digital's requests for comment on the matter. Like Perlmutter, Hayden was notified of her firing in an email, according to The Associated Press.


Planning in the Dark: LLM-Symbolic Planning Pipeline without Experts

Huang, Sukai, Lipovetzky, Nir, Cohn, Trevor

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have shown promise in solving natural language-described planning tasks, but their direct use often leads to inconsistent reasoning and hallucination. While hybrid LLM-symbolic planning pipelines have emerged as a more robust alternative, they typically require extensive expert intervention to refine and validate generated action schemas. This not only limits scalability but also introduces the potential for biased interpretation, as a single expert's reading of ambiguous natural language descriptions may not align with the user's actual intent. To address this, we propose a novel approach that constructs an action schema library to generate multiple candidates, accounting for the diverse possible interpretations of natural language descriptions. We further introduce a semantic validation and ranking module that automatically filters and ranks the generated schemas and plans without an expert in the loop. Experiments show that our pipeline maintains superiority over the direct LLM planning approach. These findings demonstrate the feasibility of a fully automated end-to-end LLM-symbolic planner that requires no expert intervention, opening AI planning to a broader audience with fewer prerequisites in domain expertise.
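The generate-then-rank idea in the abstract can be illustrated with a minimal sketch: sample several candidate interpretations of an ambiguous action description, drop candidates that fail an automated semantic check, and rank the survivors without any expert review. The schema representation, the hard-coded candidates standing in for LLM samples, and the validity/ranking heuristics are all assumptions for illustration, not the paper's actual module.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionSchema:
    """Toy PDDL-like action schema; fields are illustrative."""
    name: str
    preconditions: frozenset
    effects: frozenset

def generate_candidates(description):
    """Stand-in for LLM sampling: plausible interpretations of an
    ambiguous natural-language action description."""
    return [
        ActionSchema("pick-up", frozenset({"hand-empty", "on-table ?x"}),
                     frozenset({"holding ?x"})),
        ActionSchema("pick-up", frozenset({"on-table ?x"}),  # forgets hand-empty
                     frozenset({"holding ?x"})),
    ]

def semantically_valid(schema):
    """Minimal automated check: effects must exist and must not merely
    restate the preconditions."""
    return bool(schema.effects) and schema.effects.isdisjoint(schema.preconditions)

def rank(candidates):
    """Filter invalid schemas, then prefer more constrained preconditions,
    standing in for a learned validation-and-ranking module."""
    valid = [s for s in candidates if semantically_valid(s)]
    return sorted(valid, key=lambda s: len(s.preconditions), reverse=True)

best = rank(generate_candidates("pick up the block"))[0]
```

The point of keeping multiple candidates is that no single interpretation is trusted outright; the downstream validator, not a human expert, decides which schema feeds the symbolic planner.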


What the evolution of our own brains can tell us about the future of AI

Engadget

The explosive growth in artificial intelligence in recent years -- crowned with the meteoric rise of generative AI chatbots like ChatGPT -- has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right. In this week's Hitting the Books excerpt from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett examines this puzzling gap in computer competency by exploring the development of the organic machine AIs are modeled after: the human brain. Focusing on the five evolutionary "breakthroughs," found amid myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds, Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide the development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech.


Narcan, rare books and citizenship: How L.A.'s chief librarian is meeting the city's needs

Los Angeles Times

The sparrows fled the courtyard. It was quiet amid the classics. John Szabo stepped out of the elevator and walked through the sunlit atrium of the Central Library. He passed a slumbering homeless man and, with the efficiency of a spy, disappeared into stacks of bound archives, hundreds of thousands of relevant and obscure pages -- including the 1991 "Journal of the American Chamber of Commerce in Japan." A tall man with sparks of gray in his goatee, Szabo, the city librarian, oversees 72 branches, a $241.8 million budget, 17,000 restaurant menus, 64 ukuleles, a Shakespeare volume from 1685, and lockers of puppets for a children's theater. He stopped at a shelf holding years of "Family Handyman" magazines. Founded in 1951 for those who grout tile and hang cabinets, the periodical was no match for Prince Harry's memoir or a Stephen King novel.


Help! My Husband Is Floundering Under The Weight of All the Chores I Refuse to Do.

Slate

This week, we're helping you round out your summer reading lists by asking some of our favorite authors to step in as Prudie for the day and give you advice. This is part of our Guest Prudie series. Today's columnist is American author and "King of Horror" Stephen King, who's renowned for his horror, supernatural fiction, suspense, crime, science fiction, and fantasy novels, including It, The Shining, Carrie, and many more. His iconic books and stories have been adapted into numerous films and television series, including The Boogeyman, which was released just last month. His new novel, Holly, hits shelves this coming September.


ChatGPT's Storytelling Chops Are No Match for Dungeons & Dragons

WIRED

Our overeager party--an elvish druid; a dwarven wizard; a halfling rogue; and a human paladin--has arrived at a dusty, cluttered library. Hearing of our quest for the fabled Orb of Zarekath, the head librarian--Thimblewick, a gnome--recounts how it was once "a powerful artifact" that has long since disappeared in the nearby ruined city. But the rogue is less curious about Orb-lore and more interested in snooping and stealing from the nearby shelves. Sneaking into the shadows, he's caught by a librarian. "Oh, sorry," the rogue says with a disarming smile.


Bias, diversity, and challenges to fairness in classification and automated text analysis. From libraries to AI and back

Berendt, Bettina, Karadeniz, Özgür, Kıyak, Sercan, Mertens, Stefan, d'Haenens, Leen

arXiv.org Artificial Intelligence

Libraries are increasingly relying on computational methods, including methods from Artificial Intelligence (AI). This increasing usage raises concerns about the risks of AI that are currently broadly discussed in scientific literature, the media and law-making. In this article we investigate the risks surrounding bias and unfairness in AI usage in classification and automated text analysis within the context of library applications. We describe examples that show how the library community has been aware of such risks for a long time, and how it has developed and deployed countermeasures. We take a closer look at the notion of '(un)fairness' in relation to the notion of 'diversity', and we investigate a formalisation of diversity that models both inclusion and distribution. We argue that many of the unfairness problems of automated content analysis can also be regarded through the lens of diversity and the countermeasures taken to enhance diversity.
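A diversity formalisation that "models both inclusion and distribution," as the abstract puts it, can be illustrated with a toy two-part score: one component for whether expected categories appear at all (inclusion), and one for how evenly items spread across the categories that do appear (distribution, here as normalised Shannon entropy). This is a common construction offered purely for illustration; it is not the specific formalisation the article investigates.

```python
import math
from collections import Counter

def diversity(labels, expected_categories):
    """Toy diversity score.

    Returns (inclusion, evenness):
    - inclusion: share of expected categories present at least once
    - evenness: normalised Shannon entropy over observed category counts
    Illustrative only; not the article's formalisation.
    """
    counts = Counter(labels)
    inclusion = sum(1 for c in expected_categories if counts[c] > 0) / len(expected_categories)
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        evenness = 0.0  # a single (or no) category is maximally uneven
    else:
        probs = [n / total for n in counts.values()]
        entropy = -sum(p * math.log(p) for p in probs)
        evenness = entropy / math.log(len(counts))  # normalise to [0, 1]
    return inclusion, evenness
```

Under a measure like this, a classifier's outputs (or a collection's subject labels) can score high on distribution while still failing inclusion, which is one way the unfairness problems of automated content analysis can be read through the lens of diversity.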