
NewsGuard


AI images of Maduro's capture reap millions of views on social media

The Guardian

A supporter of Maduro holds a painting of him in Caracas. Minutes after Donald Trump announced a "large-scale strike" against Venezuela early on Saturday morning, false and misleading AI-generated images began flooding social media. There were fake photos of Nicolás Maduro being escorted off a plane by US law enforcement agents, images of jubilant Venezuelans pouring into the streets of Caracas and videos of missiles raining down on the city - all fake. The fabricated content intermixed with real videos and photos of US aircraft flying over the Venezuelan capital and explosions lighting up the dark sky.


Why Are Chatbots Parroting Russian Propaganda?

TIME - Tech

Why Are Chatbots Parroting Russian Propaganda? Welcome back to TIME's new twice-weekly newsletter about AI. Starting today, we'll be publishing these editions both as stories on Time.com and as emails. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? What to Know: Why are chatbots parroting Russian disinformation? Over the last year, as chatbots have gained the ability to search the internet before providing an answer, the likelihood that they will share false information about specific topics in the news has gone up, according to new research by NewsGuard Technologies.


Is Russia really 'grooming' Western AI?

Al Jazeera

In March, NewsGuard – a company that tracks misinformation – published a report claiming that generative Artificial Intelligence (AI) tools, such as ChatGPT, were amplifying Russian disinformation. NewsGuard tested leading chatbots using prompts based on stories from the Pravda network – a group of pro-Kremlin websites mimicking legitimate outlets, first identified by the French agency Viginum. The results were alarming: Chatbots "repeated false narratives laundered by the Pravda network 33 percent of the time", the report said. The Pravda network, which has a rather small audience, has long puzzled researchers. Some believe that its aim was performative – to signal Russia's influence to Western observers.


Decoding AI Judgment: How LLMs Assess News Credibility and Bias

Loru, Edoardo, Nudo, Jacopo, Di Marco, Niccolò, Cinelli, Matteo, Quattrociocchi, Walter

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are increasingly used to assess news credibility, yet little is known about how they make these judgments. While prior research has examined political bias in LLM outputs or their potential for automated fact-checking, their internal evaluation processes remain largely unexamined. Understanding how LLMs assess credibility provides insights into AI behavior and how credibility is structured and applied in large-scale language models. This study benchmarks the reliability and political classifications of state-of-the-art LLMs - Gemini 1.5 Flash (Google), GPT-4o mini (OpenAI), and LLaMA 3.1 (Meta) - against structured, expert-driven rating systems such as NewsGuard and Media Bias Fact Check. Beyond assessing classification performance, we analyze the linguistic markers that shape LLM decisions, identifying which words and concepts drive their evaluations. We uncover patterns in how LLMs associate credibility with specific linguistic features by examining keyword frequency, contextual determinants, and rank distributions. Beyond static classification, we introduce a framework in which LLMs refine their credibility assessments by retrieving external information, querying other models, and adapting their responses. This allows us to investigate whether their assessments reflect structured reasoning or rely primarily on prior learned associations.
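The benchmarking the abstract describes reduces to comparing each model's per-outlet credibility labels against an expert rating system and measuring agreement. A minimal sketch of that comparison is below; the outlet labels and the specific metrics (raw accuracy and Cohen's kappa) are illustrative assumptions, not the paper's actual data or pipeline.

```python
# Hypothetical sketch: measuring agreement between LLM credibility labels
# and expert-driven ratings. All labels below are invented for illustration.
from collections import Counter

def agreement_stats(llm_labels, expert_labels):
    """Return raw accuracy and Cohen's kappa between two label lists."""
    assert len(llm_labels) == len(expert_labels) and llm_labels
    n = len(llm_labels)
    observed = sum(a == b for a, b in zip(llm_labels, expert_labels)) / n
    # Chance agreement expected from the marginal label frequencies.
    llm_freq, exp_freq = Counter(llm_labels), Counter(expert_labels)
    expected = sum(
        (llm_freq[c] / n) * (exp_freq[c] / n)
        for c in set(llm_labels) | set(expert_labels)
    )
    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return observed, kappa

# Invented per-outlet labels from a model and from expert raters.
llm = ["credible", "credible", "not_credible", "credible", "not_credible"]
exp = ["credible", "not_credible", "not_credible", "credible", "not_credible"]

acc, kappa = agreement_stats(llm, exp)
print(f"accuracy={acc:.2f}, kappa={kappa:.2f}")  # accuracy=0.80, kappa=0.62
```

Kappa matters here because a model that labels almost everything "credible" can score high raw accuracy against a mostly-credible rating set while agreeing with experts no better than chance.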


The AI business model is built on hype. That's the real reason the tech bros fear DeepSeek

Kenan Malik

The Guardian

No, it was not a "Sputnik moment". The launch last month of DeepSeek R1, the Chinese generative AI chatbot, created mayhem in the tech world, with stocks plummeting and much chatter about the US losing its supremacy in AI technology. Yet, for all the disruption, the Sputnik analogy reveals less about DeepSeek than about American neuroses. The original Sputnik moment came on 4 October 1957 when the Soviet Union shocked the world by launching Sputnik 1, the first time humanity had sent a satellite into orbit. It was, to anachronistically borrow a phrase from a later and even more momentous landmark, "one giant leap for mankind", in Neil Armstrong's historic words as he took a "small step" on to the surface of the moon.


How China is using AI news anchors to deliver its propaganda

The Guardian

The news presenter has a deeply uncanny air as he delivers a partisan and pejorative message in Mandarin: Taiwan's outgoing president, Tsai Ing-wen, is as effective as limp spinach, her period in office beset by economic underperformance, social problems and protests. "Water spinach looks at water spinach. Turns out that water spinach isn't just a name," says the presenter, in an extended metaphor about Tsai being "Hollow Tsai" – a pun related to the Mandarin word for water spinach. This is not a conventional broadcast journalist, even if the lack of impartiality is no longer a shock. The anchor is generated by an artificial intelligence programme, and the segment is trying, albeit clumsily, to influence the Taiwanese presidential election. The source and creator of the video are unknown, but the clip is designed to make voters doubt politicians who want Taiwan to remain at arm's length from China, which claims that the self-governing island is part of its territory.


'Inceptionism' and Balenciaga popes: a brief history of deepfakes

The Guardian

Concern about doctored or manipulative media is always high around election cycles, but 2024 will be different for two reasons: deepfakes made by artificial intelligence (AI) and the sheer number of polls. The term deepfake refers to a hoax that uses AI to create a phoney image, most commonly fake videos of people, with the effect often compounded by a voice component. Combine that with the fact that around half the world's population is holding important elections this year – including in India, the US, the EU and, most probably, the UK – and there is potential for the technology to be highly disruptive. Here is a guide to some of the most effective deepfakes in recent years, including the first attempts to create hoax images. The banana where it all began.


Junk websites filled with AI-generated text are pulling in money from programmatic ads

MIT Technology Review

Most companies that advertise online automatically bid on spots to run those ads through a practice called "programmatic advertising." Algorithms place ads on various websites according to complex calculations that optimize the number of eyeballs an ad might attract from the company's target audience. As a result, big brands end up paying for ad placements on websites that they may have never heard of before, with little to no human oversight. To take advantage, content farms have sprung up where low-paid humans churn out low-quality content to attract ad revenue. These types of websites already have a name: "made for advertising" sites.
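The bidding the excerpt describes is commonly run as a second-price auction: the highest bidder wins the ad slot but pays the runner-up's bid. The sketch below is a deliberately simplified illustration of that mechanism; the bidder names and bid values are invented, and real programmatic exchanges add targeting, floors and fees on top of this.

```python
# Simplified sketch of one programmatic ad auction (second-price rule).
# Bidder names and amounts are hypothetical.

def second_price_auction(bids):
    """bids: dict of bidder -> bid amount. Winner pays the runner-up's bid."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder, the clearing price is that bidder's own bid.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# An ad slot on an arbitrary page is auctioned with no human review of the site.
winner, price = second_price_auction({"brand_a": 2.50, "brand_b": 1.80, "brand_c": 0.90})
print(winner, price)  # brand_a pays 1.80
```

Because the algorithm optimises only for predicted audience reach at a given price, nothing in this loop checks what the page actually contains – which is the gap "made for advertising" sites exploit.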


Robot takeover? Not quite. Here's what AI doomsday would look like

The Guardian

Alarm over artificial intelligence has reached a fever pitch in recent months. Just this week, more than 300 industry leaders published a letter warning AI could lead to human extinction and should be considered with the seriousness of "pandemics and nuclear war". Terms like "AI doomsday" conjure up sci-fi imagery of a robot takeover, but what does such a scenario actually look like? The reality, experts say, could be more drawn out and less cinematic – not a nuclear bomb but a creeping deterioration of the foundational areas of society. "I don't think the worry is of AI turning evil or AI having some kind of malevolent desire," said Jessica Newman, director of University of California Berkeley's Artificial Intelligence Security Initiative.


Elections in UK and US at risk from AI-driven disinformation, say experts

The Guardian

Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots. Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users. "The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern," he said. "Regulation would be quite wise: people need to know if they're talking to an AI, or if content that they're looking at is generated or not. The ability to really model … to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education."