Collaborating Authors

thorn


AI Generated Child Sexual Abuse Material -- What's the Harm?

Ciardha, Caoilte Ó, Buckley, John, Portnoff, Rebecca S.

arXiv.org Artificial Intelligence

The development of generative artificial intelligence (AI) tools capable of producing wholly or partially synthetic child sexual abuse material (AI CSAM) presents profound challenges for child protection, law enforcement, and societal responses to child exploitation. While some argue that the harmfulness of AI CSAM differs fundamentally from other CSAM due to a perceived absence of direct victimization, this perspective fails to account for the range of risks associated with its production and consumption. AI has been implicated in the creation of synthetic CSAM of children who have not previously been abused, the revictimization of known survivors of abuse, the facilitation of grooming, coercion and sexual extortion, and the normalization of child sexual exploitation. Additionally, AI CSAM may serve as a new or enhanced pathway into offending by lowering barriers to engagement, desensitizing users to progressively extreme content, and undermining protective factors for individuals with a sexual interest in children. This paper provides a primer on some key technologies, critically examines the harms associated with AI CSAM, and cautions against claims that it may function as a harm reduction tool, emphasizing how some appeals to harmlessness obscure its real risks and may contribute to inertia in ecosystem responses.


High School Is Becoming a Cesspool of Sexually Explicit Deepfakes

The Atlantic - Technology

For years now, generative AI has been used to conjure all sorts of realities--dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases--perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been. This morning, the Center for Democracy and Technology, a nonprofit that advocates for digital rights and privacy, released a report on the alarming prevalence of nonconsensual intimate imagery (or NCII) in American schools. In the past school year, the center's polling found, 15 percent of high schoolers reported hearing about a "deepfake"--or AI-generated image--that depicted someone associated with their school in a sexually explicit or intimate manner.


More than 1 in 10 students say they know of peers who created deepfake nudes, report says

Los Angeles Times

When news broke in February that AI-generated nude pictures of students were popping up at a Beverly Hills middle school, many district officials and parents were horrified. But others said no one should have been blindsided by the spread of AI-powered "undressing" programs. "The only thing shocking about this story," one Carlsbad parent said his 14-year-old told him, "is that people are shocked." Now, a newly released report by Thorn, a tech company that works to stop the spread of child sexual abuse material, shows how common deepfake abuse has become. The proliferation coincides with the wide availability of cheap "undressing" apps and other easy-to-use, AI-powered programs for creating deepfake nudes.


The world's leading AI companies pledge to protect the safety of children online

Engadget

Leading artificial intelligence companies including OpenAI, Microsoft, Google, Meta and others have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child-safety group Thorn and All Tech Is Human, a non-profit focused on responsible tech. The pledges from AI companies, Thorn said, "set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as a feature with generative AI unfolds." The goal of the initiative is to prevent the creation of sexually explicit material involving children and take it off social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, Thorn says.


FBI Agents Are Using Face Recognition Without Proper Training

WIRED

The US Federal Bureau of Investigation (FBI) has done tens of thousands of face recognition searches using software from outside providers in recent years. Yet only 5 percent of the 200 agents with access to the technology have taken the bureau's three-day training course on how to use it, a report from the Government Accountability Office (GAO) this month reveals. The bureau has no policy for face recognition use in place to protect privacy, civil rights, or civil liberties. Lawmakers and others concerned about face recognition have said that adequate training on the technology and how to interpret its output is needed to reduce improper use or errors, although some experts say training can lull law enforcement and the public into thinking face recognition is low risk. Since the false arrest of Robert Williams near Detroit in 2020, multiple instances have surfaced in the US of arrests after a face recognition model wrongly identified a person.


Could AI-Generated Porn Help Protect Children?

WIRED

Now that generative AI models can produce photorealistic, fake images of child sexual abuse, regulators and child safety advocates are worried that an already-abhorrent practice will spiral further out of control. But lost in this fear is an uncomfortable possibility--that AI-generated child pornography could actually benefit society in the long run by providing a less harmful alternative to the already-massive market for images of child sexual abuse. The growing consensus among scientists is that pedophilia is biological in nature, and that keeping pedophilic urges at bay can be incredibly difficult. "What turns us on sexually, we don't decide that--we discover that," said psychiatrist Dr. Fred Berlin, director of the Johns Hopkins Sex and Gender Clinic and an expert on paraphilic disorders. "It's not because [pedophiles have] chosen to have these kinds of urges or attractions. They've discovered through no fault of their own that this is the nature of what they're afflicted with in terms of their own sexual makeup … We're talking about not giving into a craving, a craving that is rooted in biology, not unlike somebody who's having a craving for heroin."


Lilith and the Crown of Thorns by The Ghost

#artificialintelligence

Lilith and the Crown of Thorns is a piece of digital artwork by The Ghost which was uploaded on May 27th, 2022. The digital art may be purchased as wall art, home decor, apparel, phone cases, greeting cards, and more. All products are produced on-demand and shipped worldwide within 2 - 3 business days.


Introduction to Artificial Intelligence and Machine Learning

#artificialintelligence

"Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence." Every species survives on the basis of the type of intelligence it possesses. Homo sapiens are termed the most "intelligent" species of all because their intelligence includes many additional and more advanced qualities compared with other species. When a machine is made or taught to mimic these human qualities of intelligence, it is termed Artificial Intelligence.


AI tool detects child abuse images with 99% accuracy

#artificialintelligence

A new AI-powered tool claims to detect child abuse images with around 99 percent accuracy. The tool, called Safer, was developed by the non-profit Thorn to assist businesses that do not have in-house filtering systems to detect and remove such images. According to the Internet Watch Foundation in the UK, reports of child abuse images surged 50 percent during the COVID-19 lockdown: in the 11 weeks starting on 23rd March, its hotline logged 44,809 reports of images, compared with 29,698 over the same period last year. Many of these images come from children who have spent more time online and been coerced into releasing images of themselves.
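The quoted figures are consistent with the headline claim: a quick arithmetic check (an illustrative script, not part of the article) shows that 44,809 reports versus 29,698 a year earlier is roughly a 50 percent year-on-year increase.

```python
# Check the Internet Watch Foundation figures quoted above:
# 44,809 reports in the 11 weeks from 23 March vs 29,698 a year earlier.
lockdown_reports = 44_809
prior_year_reports = 29_698

increase = (lockdown_reports - prior_year_reports) / prior_year_reports
print(f"Year-on-year increase: {increase:.1%}")  # roughly a 50% rise
```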


Making personal data make sense with machine learning.

#artificialintelligence

As big data, machine learning, and artificial intelligence keep growing, revolutionizing the world as we know it and playing a large role in shaping the future, questions are inevitably being raised about the ethics, governance, regulation, and privacy issues surrounding the big data revolution. At first glance, these topics can all be classed as thorns in the advancement of AI and machine learning, especially since most businesses are far more interested in the commercial benefits of the field than in its drawbacks. Recent events and global trends, however, are beginning to show the damage that can be done by ignoring these seemingly minor thorns when companies try to make money from data. The European Union is an example of how governments are starting to prioritize regulations that most tech companies had previously ignored, thereby affecting their business models. Facebook's dating app, which was due to launch today, the day before Valentine's Day, has been blocked in the European Union after Facebook failed to provide adequate and required documentation to the regulatory boards.