A professor kept a pet worm for 20 years. It just set a record.

Popular Science

Baseodiscus the Eldest lives a chill life in Virginia. Jonathan Allen, a biology professor at The College of William & Mary in Virginia, has a very strange pet: a very long ribbon worm named Baseodiscus the Eldest, or just B for short.


The Chatbot-Delusion Crisis

The Atlantic - Technology

Researchers are scrambling to figure out why generative AI appears to lead some people to a state of "psychosis." Chatbots are marketed as great companions, able to answer any question at any time. They're not just tools, but confidants; they do your homework, write love notes, and, as one recent lawsuit against OpenAI details, might readily answer 1,460 messages from the same manic user in a 48-hour period. Jacob Irwin, a 30-year-old cybersecurity professional who says he has no previous history of psychiatric incidents, is suing the tech company, alleging that ChatGPT sparked a "delusional disorder" that led to his extended hospitalization.


AI videos of animals could be dangerous. Here's how to spot them.

Popular Science

Researchers warn that AI-generated videos of animals can distort our connection to wildlife. It happens more and more frequently.


Remembering Unequally: Global and Disciplinary Bias in LLM-Generated Co-Authorship Networks

Kalhor, Ghazal, Mashhadi, Afra

arXiv.org Artificial Intelligence

Ongoing breakthroughs in Large Language Models (LLMs) are reshaping search and recommendation platforms at their core. While this shift unlocks powerful new scientometric tools, it also exposes critical fairness and bias issues that could erode the integrity of the information ecosystem. Additionally, as LLMs become more integrated into web-based searches for scholarly tools, their ability to generate summarized research work based on memorized data introduces new dimensions to these challenges. The extent of memorization in LLMs can affect the accuracy and fairness of the co-authorship networks they produce, potentially reflecting and amplifying existing biases within the scientific community and across different regions. This study critically examines the impact of LLM memorization on co-authorship networks. To this end, we assess memorization effects across three prominent models: DeepSeek R1, Llama 4 Scout, and Mixtral 8x7B, analyzing how memorization-driven outputs vary across academic disciplines and world regions. While our global analysis reveals a consistent bias favoring highly cited researchers, this pattern is not uniformly observed. Certain disciplines, such as Clinical Medicine, and regions, including parts of Africa, show more balanced representation, pointing to areas where LLM training data may reflect greater equity. These findings underscore both the risks and opportunities in deploying LLMs for scholarly discovery.


Scientists reveal exactly what a Neanderthal-human hybrid would look like

Daily Mail - Science & tech

It has been over 40,000 years since the last of the Neanderthals, our ancient human cousins, disappeared from the Earth. But from the shape of your nose to whether you're an early riser, Neanderthal genes are still shaping many of our lives today. Starting from around 250,000 years ago, ancient Homo sapiens and Neanderthals met, lived alongside each other, and often had children together. Now, MailOnline has asked leading paleoanthropologists to reveal what those hybrid children would have looked like. Scientists believe that hybrid children would have inherited traits from both of their parents.


The Race-Science Blogger Cited by The New York Times

The Atlantic - Technology

Lasker, the Times explained, was the "intermediary" who tipped off the publication about Mamdani's application, which was included in a larger hack of Columbia's computer systems. After the Times published its story, Lasker celebrated on X. "I break-uh dah news," he wrote to his more than 260,000 followers. On both X and Substack, where he also has a large following, Lasker is best known for compiling charts on the "Black-White IQ gap" and otherwise linking race to real-world outcomes. He seems convinced that any differences are the result of biology, and has shot down other possible explanations. He has suggested that crime is genetic.


The chatbot optimisation game: can we trust AI web searches?

The Guardian

The potentially carcinogenic properties of the popular artificial sweetener, added to everything from soft drinks to children's medicine, have been debated for decades. Its approval in the US stirred controversy in 1974, several UK supermarkets banned it from their products in the 00s, and peer-reviewed academic studies have long butted heads. Last year, the World Health Organization concluded aspartame was "possibly carcinogenic" to humans, while public health regulators suggest that it's safe to consume in the small portions in which it is commonly used. While many of us may look to settle the question with a quick Google search, this is exactly the sort of contentious debate that could cause problems for the internet of the future. As generative AI chatbots have rapidly developed over the past couple of years, tech companies have been quick to hype them as a utopian replacement for various jobs and services – including internet search engines.


Hundreds of millions of US research dollars may have aided Chinese military technology, GOP-led report says

FOX News

House Republicans argue in a new congressional report that hundreds of millions of dollars in federal research funding over the last decade have contributed to China's military technological advancements. Collaborations between U.S. and Chinese academics have led to research publications on advanced topics like hypersonics, directed energy, nuclear and high-energy physics, and artificial intelligence and autonomy. That information, Republicans argue, could be weaponized against the U.S. in the event of war with China. Some of the collaborative research they identified related to military applications like high-performance explosives, target tracking, and drone operation networks. The House Select Committee on China Competition, together with the Education and Workforce Committee, found some 9,000 joint research publications, funded through either the Department of Defense (DOD) or the Intelligence Community (IC), published by co-authors with ties to China's "defense and security apparatus," including entities on a Commerce Department blacklist.


Experts warn AI could generate 'major epidemics or even pandemics' -- but how soon?

FOX News

Experts researching advancements in artificial intelligence are now warning that AI models could create the next "enhanced pathogens capable of causing major epidemics or even pandemics." The declaration was made in a paper published in the journal Science by co-authors from Johns Hopkins University, Stanford University and Fordham University, who say that AI models are being "trained on or [are] capable of meaningfully manipulating substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields." "But as with any powerful new technology, such biological models will also pose considerable risks. Because of their general-purpose nature, the same biological model able to design a benign viral vector to deliver gene therapy could be used to design a more pathogenic virus capable of evading vaccine-induced immunity," researchers wrote in their abstract. "Voluntary commitments among developers to evaluate biological models' potential dangerous capabilities are meaningful and important but cannot stand alone," the paper continued. "We propose that national governments, including the United States, pass legislation and set mandatory rules that will prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics."


White faces generated by AI are more convincing than photos, finds survey

The Guardian

It sounds like a scenario straight out of a Ridley Scott film: technology that not only sounds more "real" than actual humans, but looks more convincing, too. Yet it seems that moment has already arrived. A new study has found people are more likely to think pictures of white faces generated by AI are human than photographs of real individuals. "Remarkably, white AI faces can convincingly pass as more real than human faces – and people do not realise they are being fooled," the researchers report. The team, which includes researchers from Australia, the UK and the Netherlands, said their findings had important implications in the real world, including in identity theft, with the possibility that people could end up being duped by digital impostors.