ai-generated


The Pope's Warnings About AI Were AI-Generated, a Detection Tool Claims

WIRED

Pangram Labs' updated Chrome extension puts warning labels on AI slop as you scroll your social feeds. On Monday, a brand-new Reddit account popped up on the widely read forum r/AmItheAsshole, where users have their personal disputes arbitrated by strangers. This particular user asked if they had crossed a line by "refusing to babysit my stepmother's kids because I have my own job and responsibilities." The post itself was succinct, straightforward, and grammatically clean, explaining a situation in which the person's stepmother and father often expected them to provide childcare on little notice, eventually leading to an argument. "Now there's tension at home, and I'm starting to wonder if I handled it the wrong way," the redditor concluded.


This Scammer Used an AI-Generated MAGA Girl to Grift 'Super Dumb' Men

WIRED

A med student says he's made thousands of dollars selling photos and videos of a young conservative woman he created using generative tools. Like many medical school students, Sam was broke. The 22-year-old aspiring orthopedic surgeon from northern India got some money from his parents, but he says he spent most of it subsidizing his licensing exams, and he's still saving up to hopefully emigrate to the US after graduation. So he started searching for ways to make additional money online. Sam, who requested a pseudonym to avoid jeopardizing his medical career and immigration status, tried a few things, with varying degrees of legitimacy and success.


DHS is using Google and Adobe AI to make videos

MIT Technology Review

Immigration agencies have been flooding social media with bizarre, seemingly AI-generated content. We now know more about what might be making it. The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. It comes as immigration agencies have flooded social media with content to support President Trump's mass deportation agenda, some of which appears to be made with AI, and as workers in tech have put pressure on their employers to denounce the agencies' activities. The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity. In a section about "editing images, videos or other public affairs materials using AI," it reveals for the first time that DHS is using Google's Veo 3 video generator and Adobe Firefly, estimating that the agency has between 100 and 1,000 licenses for the tools.


How can you tell if your new favourite artist is a real person?

BBC News

There's a new song doing the rounds, and in the immortal words of Kylie Minogue, you just can't get it out of your head. But what if it was created by a robot, or the artist themself is a product of artificial intelligence (AI)? Do streaming sites have an obligation to label music as AI-generated? And does it even matter, if you like what you hear?


'We could have asked ChatGPT': students fight back over course taught by AI

The Guardian

Students at the University of Staffordshire have said they feel "robbed of knowledge and enjoyment" after a course they hoped would launch their digital careers turned out to be taught in large part by AI. James and Owen were among 41 students who took a coding module at Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible". "If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer, recorded as part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course.


Imagining a future where smart glasses allow 'AI slop' to be avoided

New Scientist

"Wearing the unsmart glasses created an entirely un-augmented reality." By the mid-2020s, the world was becoming swamped with "AI slop". Whether images, video, music, emails, ads, speeches or TV shows, many people's interactions were with asinine content generated by artificial intelligence. Sometimes the experience was fun and relatively harmless, but often it was tedious and brain-sapping. At worst, it could be dangerously misleading. Even engagements with other people became suspect: who knew if the person on the phone was real or not?


Can AI Models be Jailbroken to Phish Elderly Victims? An End-to-End Evaluation

Heiding, Fred, Lermen, Simon

arXiv.org Artificial Intelligence

We present an end-to-end demonstration of how attackers can exploit AI safety failures to harm vulnerable populations: from jailbreaking LLMs to generate phishing content, to deploying those messages against real targets, to successfully compromising elderly victims. We systematically evaluated safety guardrails across six frontier LLMs spanning four attack categories, revealing critical failures where several models exhibited near-complete susceptibility to certain attack vectors. In a human validation study with 108 senior volunteers, AI-generated phishing emails successfully compromised 11% of participants. Our work uniquely demonstrates the complete attack pipeline targeting elderly populations, highlighting that current AI safety measures fail to protect those most vulnerable to fraud. Beyond generating phishing content, LLMs enable attackers to overcome language barriers and conduct multi-turn trust-building conversations at scale, fundamentally transforming fraud economics. While some providers report voluntary counter-abuse efforts, we argue these remain insufficient.


Everyone prefers human writers, including AI

Haverals, Wouter, Martin, Meredith

arXiv.org Artificial Intelligence

As AI writing tools become widespread, we need to understand how both humans and machines evaluate literary style, a domain where objective standards are elusive and judgments are inherently subjective. We conducted controlled experiments using Raymond Queneau's Exercises in Style (1947) to measure attribution bias across evaluators. Study 1 compared human participants (N=556) and AI models (N=13) evaluating literary passages from Queneau versus GPT-4-generated versions under three conditions: blind, accurately labeled, and counterfactually labeled. Study 2 tested bias generalization across a 14×14 matrix of AI evaluators and creators. Both studies revealed systematic pro-human attribution bias. Humans showed a +13.7 percentage point (pp) bias (Cohen's h = 0.28, 95% CI: 0.21-0.34), while AI models showed a +34.3 pp bias (h = 0.70, 95% CI: 0.65-0.76), a 2.5-fold stronger effect (P < 0.001). Study 2 confirmed this bias operates across AI architectures (+25.8 pp, 95% CI: 24.1-27.6), demonstrating that AI systems systematically devalue creative content when labeled as "AI-generated" regardless of which AI created it. We also find that attribution labels cause evaluators to invert assessment criteria, with identical features receiving opposing evaluations based solely on perceived authorship. This suggests AI models have absorbed human cultural biases against artificial creativity during training. Our study represents the first controlled comparison of attribution bias between human and artificial evaluators in aesthetic judgment, revealing that AI systems not only replicate but amplify this human tendency.
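For readers unfamiliar with the effect-size metric the abstract leans on, Cohen's h measures the difference between two proportions after an arcsine (variance-stabilizing) transform. A minimal Python sketch, using hypothetical favorability rates (not the paper's raw data) chosen so their gap matches the reported +13.7 pp human bias:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h: effect size for a difference between two proportions,
    computed via the arcsine transform of each proportion."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Hypothetical illustration: if evaluators rated 60.0% of human-labeled
# passages favorably versus 46.3% of AI-labeled ones (a +13.7 pp gap),
# the effect size comes out near the h = 0.28 the paper reports for
# human judges.
print(round(cohens_h(0.600, 0.463), 2))  # prints 0.28
```

Because of the arcsine transform, the same percentage-point gap yields a larger h near the extremes of 0% or 100% than near 50%, which is why the paper reports h alongside the raw pp differences.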


US investigators are using AI to detect child abuse images made by AI

MIT Technology Review

Though artificial intelligence is fueling a surge in synthetic child abuse images, it's also being tested as a way to stop harm to real victims. Generative AI has enabled the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing. The Department of Homeland Security's Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco-based Hive AI for its software, which can identify whether a piece of content was AI-generated. The filing, posted on September 19, is heavily redacted, and Hive cofounder and CEO Kevin Guo said he could not discuss the details of the contract but confirmed it involves use of the company's AI detection algorithms for child sexual abuse material (CSAM). The filing quotes data from the National Center for Missing and Exploited Children, which reported a 1,325% increase in incidents involving generative AI in 2024.


Wired and Business Insider remove articles by AI-generated 'freelancer'

The Guardian

Multiple news organisations have taken down articles written by an alleged freelance journalist that now appear to have been generated by AI. On Thursday, Press Gazette reported that at least six publications, including Wired and Business Insider, have removed articles from their websites in recent months after it was discovered that the stories, written under the name of Margaux Blanchard, were AI-generated. Wired published a story titled "They Fell in Love Playing Minecraft." A few weeks later, the outlet took down the story, stating in an editor's note: "After an additional review of the article … Wired editorial leadership has determined this article does not meet our editorial standards." The story cited a "Jessica Hu", an alleged 34-year-old "ordained officiant based in Chicago" who reportedly "made a name for herself as a 'digital celebrant', specialising in ceremonies across Twitch, Discord and VRChat", according to Press Gazette, which reviewed the Wired article. Neither Press Gazette nor the Guardian was able to verify the identity of Hu. Press Gazette further reported that in April, Business Insider published two essays by Blanchard titled: "Remote work has been the best thing for me as a parent but the worst as a person" and "I had my first kid at 45."