
4 Arrested Over Scattered Spider Hacking Spree

WIRED

WIRED reported this week on public records that show the United States Department of Homeland Security urging local law enforcement around the country to interpret common protest activities and surrounding logistics--including riding a bike, livestreaming a police encounter, or skateboarding--as "violent tactics." The guidance could influence cops to use everyday behavior as a pretext for police action.

An AI hiring bot used on the McDonald's "McHire" site exposed tens of millions of job applicants' personal data because of a group of web-based security vulnerabilities--including use of the classically guessable password "123456" on an administrator account. The site's chatbot, known as Olivia, was built by the artificial intelligence software firm Paradox.ai.

Meanwhile, in the wake of last week's devastating floods in Texas that killed at least 120 people, conspiracy theories about the extreme weather event have gained enough traction among anti-government extremists, GOP influencers, and others with large platforms to produce real-world consequences like death threats.


AI-generated child sexual abuse videos surging online, watchdog says

The Guardian

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology. The Internet Watch Foundation said AI videos of abuse had "crossed the threshold" of being near-indistinguishable from "real imagery" and had sharply increased in prevalence online this year. In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year. The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material. The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles.


High School Is Becoming a Cesspool of Sexually Explicit Deepfakes

The Atlantic - Technology

For years now, generative AI has been used to conjure all sorts of realities--dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases--perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been. This morning, the Center for Democracy and Technology, a nonprofit that advocates for digital rights and privacy, released a report on the alarming prevalence of nonconsensual intimate imagery (or NCII) in American schools. In the past school year, the center's polling found, 15 percent of high schoolers reported hearing about a "deepfake"--or AI-generated image--that depicted someone associated with their school in a sexually explicit or intimate manner.


AI is overpowering efforts to catch child predators, experts warn

The Guardian

The volume of sexually explicit images of children being generated by predators using artificial intelligence is overwhelming law enforcement's capabilities to identify and rescue real-life victims, child safety experts warn. Prosecutors and child safety groups working to combat crimes against children say AI-generated images have become so lifelike that in some cases it is difficult to determine whether real children have been subjected to real harms for their production. A single AI model can generate tens of thousands of new images in a short amount of time, and this content has begun to flood the dark web and seep into the mainstream internet. "We are starting to see reports of images that are of a real child but have been AI-generated, but that child was not sexually abused. But now their face is on a child that was abused," said Kristina Korobov, senior attorney at the Zero Abuse Project, a Minnesota-based child safety non-profit.


Child predators are using AI to create sexual images of their favorite 'stars': 'My body will never be mine again'

The Guardian

Predators active on the dark web are increasingly using artificial intelligence to create sexually explicit images of children, fixating especially on "star" victims, child safety experts warn. Child safety groups tracking the activity of predators chatting in dark web forums say they are increasingly finding conversations about creating new images based on older child sexual abuse material (CSAM). Many of these predators using AI obsess over child victims referred to as "stars" in predator communities for the popularity of their images. "The communities of people who trade this material get infatuated with individual children," said Sarah Gardner, chief executive officer of the Heat Initiative, a Los Angeles non-profit focused on child protection. "They want more content of those children, which AI has now allowed them to do."


The DOJ makes its first known arrest for AI-generated CSAM

Engadget

The US Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). As far as we know, this is the first case of its kind as the DOJ looks to establish a judicial precedent that exploitative materials are still illegal even when no children were used to create them. "Put simply, CSAM generated by AI is still CSAM," Deputy Attorney General Lisa Monaco wrote in a press release. The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then allegedly used to try to lure an underage boy into sexual situations. That alleged contact with a minor will likely play a central role in the eventual trial for the four counts of "producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16."


As Tech CEOs Are Grilled Over Child Safety Online, AI Is Complicating the Issue

TIME - Tech

The CEOs of five social media companies including Meta, TikTok and X (formerly Twitter) were grilled by Senators on Wednesday about how they are preventing online child sexual exploitation. The Senate Judiciary Committee called the meeting to hold the CEOs to account for what they said was a failure to prevent the abuse of minors, and to ask whether they would support the laws that members of the Committee had proposed to address the problem. It is an issue that is getting worse, according to the National Center for Missing and Exploited Children, which says reports of child sexual abuse material (CSAM) reached a record high last year of more than 36 million, as reported by the Washington Post. The National Center for Missing and Exploited Children's CyberTipline, a centralized system in the U.S. for reporting online CSAM, was alerted to more than 88 million files in 2022, with almost 90% of reports coming from outside the country. Mark Zuckerberg of Meta, Shou Chew of TikTok, and Linda Yaccarino of X appeared alongside Evan Spiegel of Snap and Jason Citron of Discord to answer questions from the Senate Judiciary Committee.


AI-created child sexual abuse images 'threaten to overwhelm internet'

The Guardian

The "worst nightmares" about artificial intelligence-generated child sexual abuse images are coming true and threaten to overwhelm the internet, a safety watchdog has warned. The Internet Watch Foundation (IWF) said it had found nearly 3,000 AI-made abuse images that broke UK law. The UK-based organisation said existing images of real-life abuse victims were being built into AI models, which then produce new depictions of them. It added that the technology was also being used to create images of celebrities who have been "de-aged" and then depicted as children in sexual abuse scenarios. Other examples of child sexual abuse material (CSAM) included using AI tools to "nudify" pictures of clothed children found online.


Discord bans teen dating servers and the sharing of AI-generated CSAM

Engadget

Discord has updated its policy meant to protect children and teens on its platform after reports came out that predators have been using the app to create and spread child sexual abuse material (CSAM), as well as to groom young teens. The platform now explicitly prohibits AI-generated photorealistic CSAM. As The Washington Post recently reported, the rise of generative AI has also led to an explosion of lifelike images with sexual depictions of children. The publication had seen conversations about the use of Midjourney -- a text-to-image generative AI accessed through Discord -- to create inappropriate images of children. In addition to banning AI-generated CSAM, Discord now also explicitly prohibits any other kind of text or media content that sexualizes children.