CSAM
Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from
The company's reports were not actionable, according to a child safety organization. The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The vast majority of that content was reported by Amazon, which found the material in its training data, according to an investigation. Amazon said only that it obtained the inappropriate content from external sources used to train its AI services and claimed it could not provide any further details about where the CSAM came from. "This is really an outlier," said Fallon McNulty, executive director of NCMEC's CyberTipline. The CyberTipline is where many types of US-based companies are legally required to report suspected CSAM.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.75)
- Law (0.75)
- Marketing (0.54)
Elon Musk's Pornography Machine
On X, sexual harassment and perhaps even child abuse are the latest memes. Earlier this week, some people on X began replying to photos with a very specific kind of request. "Put her in a bikini," "take her dress off," "spread her legs," and so on, they commanded Grok, the platform's built-in chatbot. Again and again, the bot complied, using photos of real people (celebrities and noncelebrities, including some who appear to be young children) and putting them in bikinis, revealing underwear, or sexual poses. By one estimate, Grok generated one nonconsensual sexual image every minute in a roughly 24-hour stretch.
- Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.89)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.35)
OpenAI's Child Exploitation Reports Increased Sharply This Year
OpenAI sent 80 times as many child exploitation incident reports to the National Center for Missing & Exploited Children during the first half of 2025 as it did during the same period in 2024, according to a recent update from the company. The NCMEC's CyberTipline is a Congressionally authorized clearinghouse for reporting child sexual abuse material (CSAM) and other forms of child exploitation. Companies are required by law to report apparent child exploitation to the CyberTipline. When a company sends a report, NCMEC reviews it and then forwards it to the appropriate law enforcement agency for investigation.
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
UK to ban deepfake AI 'nudification' apps
The UK government says it will ban so-called nudification apps as part of efforts to tackle misogyny online. New laws, announced on Thursday as part of a wider strategy to halve violence against women and girls, will make it illegal to create and supply AI tools letting users edit images to seemingly remove someone's clothing. The new offences would build on existing rules around sexually explicit deepfakes and intimate image abuse, the government said. "Women and girls deserve to be safe online as well as offline," said Technology Secretary Liz Kendall. "We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes."
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (15 more...)
- Law (1.00)
- Information Technology > Security & Privacy (0.95)
- Government > Regional Government > Europe Government > United Kingdom Government (0.69)
AI Generated Child Sexual Abuse Material -- What's the Harm?
Ó Ciardha, Caoilte; Buckley, John; Portnoff, Rebecca S.
The development of generative artificial intelligence (AI) tools capable of producing wholly or partially synthetic child sexual abuse material (AI CSAM) presents profound challenges for child protection, law enforcement, and societal responses to child exploitation. While some argue that the harmfulness of AI CSAM differs fundamentally from other CSAM due to a perceived absence of direct victimization, this perspective fails to account for the range of risks associated with its production and consumption. AI has been implicated in the creation of synthetic CSAM of children who have not previously been abused, the revictimization of known survivors of abuse, the facilitation of grooming, coercion and sexual extortion, and the normalization of child sexual exploitation. Additionally, AI CSAM may serve as a new or enhanced pathway into offending by lowering barriers to engagement, desensitizing users to progressively extreme content, and undermining protective factors for individuals with a sexual interest in children. This paper provides a primer on some key technologies, critically examines the harms associated with AI CSAM, and cautions against claims that it may function as a harm reduction tool, emphasizing how some appeals to harmlessness obscure its real risks and may contribute to inertia in ecosystem responses.
- South America > Brazil (0.04)
- North America > United States > Florida > Orange County (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (4 more...)
- Overview (1.00)
- Research Report > New Finding (0.46)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.90)
US investigators are using AI to detect child abuse images made by AI
Though artificial intelligence is fueling a surge in synthetic child abuse images, it's also being tested as a way to stop harm to real victims. Generative AI has enabled the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing. The Department of Homeland Security's Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco-based Hive AI for its software, which can identify whether a piece of content was AI-generated. The filing, posted on September 19, is heavily redacted. Hive cofounder and CEO Kevin Guo said he could not discuss the details of the contract, but confirmed it involves use of the company's AI detection algorithms for child sexual abuse material (CSAM). The filing quotes data from the National Center for Missing and Exploited Children that reported a 1,325% increase in incidents involving generative AI in 2024.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- North America > United States > Massachusetts (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.05)
An AI Image Generator's Exposed Database Reveals What People Really Used It For
Tens of thousands of explicit AI-generated images, including AI-generated child sexual abuse material, were left open and accessible to anyone on the internet, according to new research seen by WIRED. An open database belonging to an AI image-generation firm contained more than 95,000 records, including some prompt data and images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé de-aged to look like children. The exposed database, which was discovered by security researcher Jeremiah Fowler, who shared details of the leak with WIRED, is linked to South Korea–based website GenNomis. The website and its parent company, AI-Nomis, hosted a number of image generation and chatbot tools for people to use. More than 45 GB of data, mostly made up of AI images, was left in the open.
- Asia > South Korea (0.26)
- Europe > United Kingdom (0.06)
- Law (0.77)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.61)
- Media > Music (0.57)
- (2 more...)
Unveiling AI's Threats to Child Protection: Regulatory efforts to Criminalize AI-Generated CSAM and Emerging Children's Rights Violations
Kokolaki, Emmanouela; Fragopoulou, Paraskevi
This paper aims to present new alarming trends in the field of child sexual abuse through imagery, as part of SafeLine's research activities in the field of cybercrime, child sexual abuse material and the protection of children's rights to safe online experiences. It focuses primarily on the phenomenon of AI-generated CSAM, the sophisticated production methods discussed in dark web forums, and the crucial role that open-source AI models play in the evolution of this overwhelming phenomenon. The paper's main contribution is a correlation analysis between the hotline's reports and domain names identified in dark web forums, where users' discussions focus on exchanging information specifically related to the generation of AI-CSAM. The objective was to reveal the close connection between clear net and dark web content, which was accomplished through the use of the ATLAS dataset of the Voyager system. Furthermore, through the analysis of a set of posts extracted from the above dataset, valuable conclusions are drawn about the techniques forum members employ to produce AI-generated CSAM, along with users' views on this type of content and the routes they follow to circumvent technological barriers intended to prevent malicious use. As the ultimate contribution of this research, an overview is presented of current legislative developments in all member countries of the INHOPE organization and the issues arising in the process of regulating AI-CSAM, shedding light on the legal challenges of regulating and limiting the phenomenon.
- Oceania > Australia (0.68)
- Oceania > New Zealand (0.46)
- Asia > Russia (0.46)
- (45 more...)
Roblox, Discord, OpenAI and Google found new child safety group
Roblox, Discord, OpenAI and Google are launching a nonprofit organization called ROOST, or Robust Open Online Safety Tools, which hopes "to build scalable, interoperable safety infrastructure suited for the AI era." The organization plans on providing free, open-source safety tools to public and private organizations to use on their own platforms, with a special focus on child safety to start. The press release announcing ROOST specifically calls out plans to offer "tools to detect, review, and report child sexual abuse material (CSAM)." Partner companies are providing funding for these tools, and the technical expertise to build them, too. The operating theory of ROOST is that access to generative AI is rapidly changing the online landscape, making the need for "reliable and accessible safety infrastructure" all the more urgent.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.60)
- Law (0.60)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.99)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.66)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.66)
A predator used her 12-year-old face to make porn. She helped pass a law to make that a crime
Last year, Kaylin Hayman walked into a Pittsburgh court to testify against a man she'd never met who had used her face to make pornographic pictures with artificial intelligence technology. Kaylin, 16, is a child actress who starred in the Disney show Just Roll With It from 2019 to 2021. The perpetrator, a 57-year-old man named James Smelko, had targeted her because of her public profile. She is one of about 40 of his victims, all of them child actors. In one of the images of Kaylin submitted into evidence at the trial, Smelko used her face from a photo posted on Instagram when she was 12, working on set, and superimposed it onto the naked body of someone else.
- Oceania > Australia (0.05)
- North America > United States > Pennsylvania (0.05)
- North America > United States > California > Ventura County > Ventura (0.05)
- Europe > United Kingdom (0.05)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.53)