Equity threatens mass direct action over use of actors' images in AI content

The Guardian

The performing arts union Equity has threatened mass direct action over tech and entertainment companies' use of its members' likenesses, images and voices in AI content without permission. Its general secretary, Paul W Fleming, said it planned to coordinate data requests en masse to companies to force them to disclose whether they had used members' data in AI-generated material without consent. Last week the union confirmed it was supporting a Scottish actor who believes her image was used in the creation of the "AI actor" Tilly Norwood, which has been widely condemned by the film industry.


Everyone prefers human writers, including AI

Haverals, Wouter, Martin, Meredith

arXiv.org Artificial Intelligence

As AI writing tools become widespread, we need to understand how both humans and machines evaluate literary style, a domain where objective standards are elusive and judgments are inherently subjective. We conducted controlled experiments using Raymond Queneau's Exercises in Style (1947) to measure attribution bias across evaluators. Study 1 compared human participants (N=556) and AI models (N=13) evaluating literary passages from Queneau versus GPT-4-generated versions under three conditions: blind, accurately labeled, and counterfactually labeled. Study 2 tested bias generalization across a 14×14 matrix of AI evaluators and creators. Both studies revealed systematic pro-human attribution bias. Humans showed +13.7 percentage point (pp) bias (Cohen's h = 0.28, 95% CI: 0.21-0.34), while AI models showed +34.3pp bias (h = 0.70, 95% CI: 0.65-0.76), a 2.5-fold stronger effect (P < 0.001). Study 2 confirmed this bias operates across AI architectures (+25.8pp, 95% CI: 24.1-27.6), demonstrating that AI systems systematically devalue creative content when labeled as "AI-generated" regardless of which AI created it. We also find that attribution labels cause evaluators to invert assessment criteria, with identical features receiving opposing evaluations based solely on perceived authorship. This suggests AI models have absorbed human cultural biases against artificial creativity during training. Our study represents the first controlled comparison of attribution bias between human and artificial evaluators in aesthetic judgment, revealing that AI systems not only replicate but amplify this human tendency.
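The effect sizes in the abstract are reported as Cohen's h, a standard measure of the difference between two proportions based on the arcsine transform. As a minimal sketch (the preference rates below are made-up illustrative numbers, not the study's data), Cohen's h for a preference gap can be computed like this:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h: effect size for the difference between two proportions,
    computed via the arcsine (variance-stabilizing) transform."""
    phi1 = 2 * math.asin(math.sqrt(p1))
    phi2 = 2 * math.asin(math.sqrt(p2))
    return phi1 - phi2

# Illustrative (made-up) numbers: an evaluator prefers a passage
# 75% of the time when it is labeled "human" vs 60% when labeled "AI".
h = cohens_h(0.75, 0.60)
print(f"Cohen's h = {h:.2f}")  # → Cohen's h = 0.32
```

By Cohen's conventional rule of thumb, h ≈ 0.2 is a small effect and h ≈ 0.5 a medium one, which is why the paper describes the AI evaluators' h = 0.70 as a markedly stronger bias than the humans' h = 0.28.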


RedNote-Vibe: A Dataset for Capturing Temporal Dynamics of AI-Generated Text in Social Media

Li, Yudong, Sun, Yufei, Yao, Yuhan, Yang, Peiru, Li, Wanyue, Zou, Jiajun, Huang, Yongfeng, Shen, Linlin

arXiv.org Artificial Intelligence

The proliferation of Large Language Models (LLMs) has led to widespread AI-Generated Text (AIGT) on social media platforms, creating unique challenges where content dynamics are driven by user engagement and evolve over time. However, existing datasets mainly depict static AIGT detection. In this work, we introduce RedNote-Vibe, the first longitudinal (5-years) dataset for social media AIGT analysis. This dataset is sourced from Xiaohongshu platform, containing user engagement metrics (e.g., likes, comments) and timestamps spanning from the pre-LLM period to July 2025, which enables research into the temporal dynamics and user interaction patterns of AIGT. Furthermore, to detect AIGT in the context of social media, we propose PsychoLinguistic AIGT Detection Framework (PLAD), an interpretable approach that leverages psycholinguistic features. Our experiments show that PLAD achieves superior detection performance and provides insights into the signatures distinguishing human and AI-generated content. More importantly, it reveals the complex relationship between these linguistic features and social media engagement. The dataset is available at https://github.com/testuser03158/RedNote-Vibe.
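The abstract does not enumerate PLAD's actual feature set, but interpretable psycholinguistic detectors of this kind typically score texts on simple lexical signals. A minimal sketch, using hypothetical stand-in features (type-token ratio, mean word length, first-person pronoun rate) rather than PLAD's real ones:

```python
import re

def psycholinguistic_features(text: str) -> dict:
    """Toy psycholinguistic feature extractor (illustrative only, not PLAD).
    Returns interpretable lexical statistics of the kind such detectors use."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return {"type_token_ratio": 0.0, "mean_word_length": 0.0,
                "first_person_rate": 0.0}
    first_person = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
    return {
        # Lexical diversity: distinct words / total words.
        "type_token_ratio": len(set(words)) / len(words),
        # Average characters per word.
        "mean_word_length": sum(len(w) for w in words) / len(words),
        # Share of first-person pronouns (a self-reference signal).
        "first_person_rate": sum(w in first_person for w in words) / len(words),
    }

feats = psycholinguistic_features("I loved the food, and we will be back soon.")
print(feats)
```

Features like these would then feed a classifier, and because each one has a plain-language reading, the resulting predictions stay interpretable, which is the property the PLAD authors emphasize.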


Cat soap operas and babies trapped in space: the 'AI slop' taking over YouTube

The Guardian

Babies trapped in space, zombie football stars and cat soap operas: welcome to YouTube in the era of AI video. Nearly one in 10 of the fastest growing YouTube channels globally are showing AI-generated content only, as breakthroughs in the technology spur a flood of artificial content. Guardian analysis of data from the analytics firm Playboard shows that out of the top 100 fastest growing channels in July this year, nine were showing purely AI-generated content. The offerings include channels featuring bizarre narratives such as a baby crawling into a pre-launch space rocket, an undead Cristiano Ronaldo and melodramas featuring humanised cats. AI video generation has surged amid the release of powerful tools such as Google's Veo 3 and Elon Musk's Grok Imagine.


From Sensual Butt Songs to Santa's Alleged Coke Habit: AI Slop Music Is Getting Harder to Avoid

WIRED

AI slop is flooding every single digital platform, and music streaming services are no exception, so much so that even someone who generally avoids AI might find themselves unknowingly listening to a robot hornily singing about butts. Take the sordid saga of "Make Love to My Shitter," an AI-generated track from an artist called BannedVinylCollection. Brace Belden, a host of the popular politics podcast TrueAnon, says that Spotify recently queued up the bawdy song after he'd finished listening to alt-country legend Lucinda Williams' 1992 album Sweet Old World. "I didn't realize the song was AI at first," he says. "I thought it might've been some obscene joke record from the 80s or 90s."


James Bulger's mum seeks AI law to curb clips of murder victims

BBC News

There were plans to include measures to force social media companies to remove some "legal-but-harmful" content in the Online Safety Act, before it became law. But the proposals were scrapped over censorship concerns. Online safety campaigners argue the rules around removing harmful content needed tightening to close loopholes in the act. In January this year, Technology Secretary Peter Kyle told the BBC he had "inherited an unsatisfactory legislative settlement" in the Online Safety Act. "I'm very open-minded and I've said publicly, I think we'll have to legislate into the future again," Kyle said.


Meta says AI-generated content was less than 1 percent of election misinformation

Engadget

AI-generated content played a much smaller role in global election misinformation than what many officials and researchers had feared, according to a new analysis from Meta. In an update on its efforts to safeguard dozens of elections in 2024, the company said that AI content made up only a fraction of election-related misinformation that was caught and labeled by its fact checkers. "During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation," the company shared in a blog post, referring to elections in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU's Parliamentary elections. The update comes after numerous government officials and researchers had for months raised the alarm about the role generative AI could play in supercharging election misinformation in a year when more than 2 billion people were expected to go to the polls. But those fears largely did not play out, at least on Meta's platforms, according to the company's President of Global Affairs, Nick Clegg.


How not to get bamboozled by AI content on the web

PCWorld

Nowadays, it's easy to get fooled by AI content on the web. Whether it's a picture of the Pope sporting a puffy Balenciaga jacket or Trump getting tackled and arrested, these AI-generated images appear super realistic (as long as you're not looking too close), so it can be hard to separate fact from fiction. This is because AI doesn't really understand context in the cultural or historical sense. While some AI-generated images are harmless and do not spread misinformation, others, especially ones involving celebrities or politicians, can cause a great deal of damage and brain rot. Heck, I consider myself to be a relatively tech-savvy person and even I've been fooled once or twice.


'Unjust threat': Murdoch and artists align in fight over AI content scraping

The Guardian

It is an unlikely alliance: the billionaire media mogul Rupert Murdoch and a panoply of leading artists including the Radiohead singer, Thom Yorke, the actors Kevin Bacon and Julianne Moore, and the author Kazuo Ishiguro. This week, they began two very public fights with artificial intelligence companies, accusing them of using their intellectual property without permission to build the increasingly powerful and lucrative new technology. More than 13,000 creative professionals from the worlds of literature, music, film, theatre and television released a statement warning that AI firms training programs such as ChatGPT on their works without a licence posed a "major, unjust threat" to their livelihoods. By the end of the week that number had almost doubled to 25,000. It came a day after Murdoch, owner of the publishing group News Corp, whose newspapers include the Wall Street Journal, the Sun, the Times and the Australian, launched a legal action against the AI-powered search engine Perplexity, accusing it of "illegally copying" some of his US titles' journalism.


Russia produced most AI content to sway election: U.S. intelligence

The Japan Times

Russia has generated more artificial intelligence content to influence the U.S. presidential election than any other foreign power as part of its broader effort to boost Republican candidate Donald Trump over Democrat Kamala Harris, a U.S. intelligence official said on Monday. The official from the Office of the Director of National Intelligence (ODNI), speaking on condition of anonymity, made the comment in a briefing to reporters on the alleged use of AI by Russia and other countries to influence the Nov. 5 vote. The AI content Moscow produced is "consistent with Russia's broader efforts to boost the former president's (Trump) candidacy and denigrate the vice president (Harris) and the Democratic Party, including through conspiratorial narratives," he said.