IWF
A Tipping Point in Online Child Abuse
Thousands of abusive videos were produced last year--that researchers know of. New data show that in 2025 the volume of child sexual abuse material online was likely larger than at any other point in history. A record 312,030 reports of confirmed child sexual abuse material were investigated last year by the Internet Watch Foundation, a U.K.-based organization that works around the globe to identify and remove such material from the web. The record is concerning in itself: it means the overall volume of abuse material detected on the internet grew by 7 percent over 2024, when the previous record was set.
- Europe > United Kingdom (0.06)
- North America > United States > California (0.05)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law > Criminal Law (0.99)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.53)
Elon Musk's Grok AI appears to have made child sexual imagery, says charity
The Internet Watch Foundation (IWF) charity says its analysts have discovered criminal imagery of girls aged between 11 and 13 which appears to have been created using Grok. The AI tool is owned by Elon Musk's firm xAI. It can be accessed either through its website and app, or through the social media platform X. The IWF said it found sexualised and topless imagery of girls on a dark web forum in which users claimed they used Grok to create the imagery. The BBC has approached X and xAI for comment.
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (14 more...)
AI Generated Child Sexual Abuse Material -- What's the Harm?
Ciardha, Caoilte Ó, Buckley, John, Portnoff, Rebecca S.
The development of generative artificial intelligence (AI) tools capable of producing wholly or partially synthetic child sexual abuse material (AI CSAM) presents profound challenges for child protection, law enforcement, and societal responses to child exploitation. While some argue that the harmfulness of AI CSAM differs fundamentally from other CSAM due to a perceived absence of direct victimization, this perspective fails to account for the range of risks associated with its production and consumption. AI has been implicated in the creation of synthetic CSAM of children who have not previously been abused, the revictimization of known survivors of abuse, the facilitation of grooming, coercion and sexual extortion, and the normalization of child sexual exploitation. Additionally, AI CSAM may serve as a new or enhanced pathway into offending by lowering barriers to engagement, desensitizing users to progressively extreme content, and undermining protective factors for individuals with a sexual interest in children. This paper provides a primer on some key technologies, critically examines the harms associated with AI CSAM, and cautions against claims that it may function as a harm reduction tool, emphasizing how some appeals to harmlessness obscure its real risks and may contribute to inertia in ecosystem responses.
- South America > Brazil (0.04)
- North America > United States > Florida > Orange County (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (4 more...)
- Overview (1.00)
- Research Report > New Finding (0.46)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.90)
Chatbot site depicting child sexual abuse images raises fears over misuse of AI
The IWF said it had been alerted to a chatbot site that offered scenarios including 'child prostitute in a hotel' and 'child and teacher alone after class'. A chatbot site offering explicit scenarios with preteen characters, illustrated by illegal abuse images, has raised fresh fears about the misuse of artificial intelligence. A report by a child safety watchdog has triggered calls for the UK government to impose safety guidelines on AI companies, amid a surge in child sexual abuse material (CSAM) created by the technology. The Internet Watch Foundation said it had been alerted to a chatbot site that offered a number of scenarios including "child prostitute in a hotel", "sex with your child while your wife is on holiday" and "child and teacher alone after class".
- Europe > United Kingdom (1.00)
- North America > United States (0.17)
- Europe > Ukraine (0.06)
- (2 more...)
The Impact of Item-Writing Flaws on Difficulty and Discrimination in Item Response Theory
Schmucker, Robin, Moore, Steven
High-quality test items are essential for educational assessments, particularly within Item Response Theory (IRT). Traditional validation methods rely on resource-intensive pilot testing to estimate item difficulty and discrimination. More recently, Item-Writing Flaw (IWF) rubrics emerged as a domain-general approach for evaluating test items based on textual features. However, their relationship to IRT parameters remains underexplored. To address this gap, we conducted a study involving over 7,000 multiple-choice questions across various STEM subjects (e.g., math and biology). Using an automated approach, we annotated each question with a 19-criteria IWF rubric and studied relationships to data-driven IRT parameters. Our analysis revealed statistically significant links between the number of IWFs and IRT difficulty and discrimination parameters, particularly in life and physical science domains. We further observed how specific IWF criteria can impact item quality more or less severely (e.g., negative wording vs. implausible distractors). Overall, while IWFs are useful for predicting IRT parameters--particularly for screening low-difficulty MCQs--they cannot replace traditional data-driven validation methods. Our findings highlight the need for further research on domain-general evaluation rubrics and algorithms that understand domain-specific content for robust item validation.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- Europe > Switzerland (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Africa > Sudan (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Instructional Material (1.00)
- Education > Educational Technology > Educational Software > Computer Based Training (0.68)
- Education > Curriculum > Subject-Specific Education (0.47)
- Education > Educational Setting > Online (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.67)
- (2 more...)
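The difficulty and discrimination parameters the abstract relates to IWF counts come from item response theory models such as the two-parameter logistic (2PL). A minimal sketch of that model follows; the parameter values are illustrative, not estimates from the study:

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT model: probability that a test-taker of ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical items, not estimates from the paper:
easy = {"a": 1.2, "b": -1.0}   # low difficulty, decent discrimination
hard = {"a": 0.8, "b": 1.5}    # high difficulty, weaker discrimination

for theta in (-1.0, 0.0, 1.0):
    print(f"theta={theta:+.1f}  "
          f"P(easy)={p_correct(theta, **easy):.3f}  "
          f"P(hard)={p_correct(theta, **hard):.3f}")
```

In this framing, a flawed item would typically surface as an unexpectedly low b (too easy) or low a (poor discrimination), which is the kind of relationship the study probes against IWF counts.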
AI-generated child sexual abuse imagery reaching 'tipping point', says watchdog
Child sexual abuse imagery generated by artificial intelligence tools is becoming more prevalent on the open web and reaching a "tipping point", according to a safety watchdog. The Internet Watch Foundation said the amount of AI-made illegal content it had seen online over the past six months had already exceeded the total for the previous year. The organisation, which runs a UK hotline but also has a global remit, said almost all the content was found on publicly available areas of the internet and not on the dark web, which must be accessed by specialised browsers. The IWF's interim chief executive, Derek Ray-Hill, said the level of sophistication in the images indicated that the AI tools used had been trained on images and videos of real victims. "Recent months show that this problem is not going away and is in fact getting worse," he said.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.63)
An Automatic Question Usability Evaluation Toolkit
Moore, Steven, Costello, Eamon, Nguyen, Huy A., Stamper, John
Evaluating multiple-choice questions (MCQs) involves either labor-intensive human assessments or automated methods that prioritize readability, often overlooking deeper question design flaws. To address this issue, we introduce the Scalable Automatic Question Usability Evaluation Toolkit (SAQUET), an open-source tool that leverages the Item-Writing Flaws (IWF) rubric for a comprehensive and automated quality evaluation of MCQs. By harnessing the latest in large language models such as GPT-4, advanced word embeddings, and Transformers designed to analyze textual complexity, SAQUET effectively pinpoints and assesses a wide array of flaws in MCQs. We first demonstrate the discrepancy between commonly used automated evaluation metrics and the human assessment of MCQ quality. Then we evaluate SAQUET on a diverse dataset of MCQs across the five domains of Chemistry, Statistics, Computer Science, Humanities, and Healthcare, showing how it effectively distinguishes between flawed and flawless questions, providing a level of analysis beyond what is achievable with traditional metrics. With an accuracy rate of over 94% in detecting the presence of flaws identified by human evaluators, our findings emphasize the limitations of existing evaluation methods and showcase the toolkit's potential to improve the quality of educational assessments.
- Europe > Switzerland (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (2 more...)
- Education > Educational Setting > Online (0.68)
- Education > Assessment & Standards > Student Performance (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.51)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
Paedophiles create nude AI images of children to extort them, says charity
Paedophiles are being urged to use artificial intelligence to create nude images of children to extort more extreme material from them, according to a child abuse charity. The Internet Watch Foundation (IWF) said a manual found on the dark web contained a section encouraging criminals to use "nudifying" tools to remove clothing from underwear shots sent by a child. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said. "This is the first evidence we have seen that perpetrators are advising and encouraging each other to use AI technology for these ends," said the IWF. The charity, which finds and removes child sexual abuse material online, warned last year of a rise in sextortion cases, where victims are manipulated into sending graphic images of themselves and are then threatened with the release of those images unless they hand over money.
AI-created child sexual abuse images 'threaten to overwhelm internet'
The "worst nightmares" about artificial intelligence-generated child sexual abuse images are coming true and threaten to overwhelm the internet, a safety watchdog has warned. The Internet Watch Foundation (IWF) said it had found nearly 3,000 AI-made abuse images that broke UK law. The UK-based organisation said existing images of real-life abuse victims were being built into AI models, which then produce new depictions of them. It added that the technology was also being used to create images of celebrities who have been "de-aged" and then depicted as children in sexual abuse scenarios. Other examples of child sexual abuse material (CSAM) included using AI tools to "nudify" pictures of clothed children found online.
The AI-Generated Child Abuse Nightmare Is Here
A horrific new era of ultrarealistic, AI-generated, child sexual abuse images is now underway, experts warn. Offenders are using downloadable open source generative AI models, which can produce images, to devastating effects. The technology is being used to create hundreds of new images of children who have previously been abused. Offenders are sharing datasets of abuse images that can be used to customize AI models, and they're starting to sell monthly subscriptions to AI-generated child sexual abuse material (CSAM). The details of how the technology is being abused are included in a new, wide-ranging report released by the Internet Watch Foundation (IWF), a nonprofit based in the UK that scours and removes abuse content from the web.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (1.00)