Google Messages starts rolling out sensitive content warnings for nude images

Engadget

Google Messages has started rolling out sensitive content warnings for nudity after first unveiling the feature late last year. The new feature performs two key actions if the AI-based system detects a message containing a nude image: it blurs the photo and triggers a warning if your child tries to open, send or forward it. Finally, it provides resources for you and your child to get help. All detection happens on the device to ensure images and data remain private. Sensitive content warnings are enabled by default for supervised users and signed-in unsupervised teens, the company notes.


Teens are now using AI chatbots to create and spread nude images of classmates, alarming education experts

FOX News

A troubling trend has emerged in schools across the United States, with young students falling victim to the increasing use of artificial intelligence (AI)-powered "nudify" apps that can create fake pornography of classmates. "Nudify" is an umbrella term for a plethora of widely available apps and websites that allow users to alter photos of fully dressed individuals and virtually undress them. Some apps can create nude images from just a headshot of the victim. Don Austin, the superintendent of the Palo Alto Unified School District, told Fox News Digital that this type of online harassment can be more relentless than traditional in-person bullying. "It used to be that a bully had to come over and push you. Palo Alto is not a community where people are going to come push anybody into a locker. But it's not immune from online bullying," Austin said.


AI-powered deepfake nude websites are targeted by San Francisco city attorney's lawsuit

Los Angeles Times

David Chiu announced Thursday that his office is suing the operators of 16 A.I.-powered "undressing" websites that help users create and distribute deepfake nude photos of women and girls. The lawsuit, which city officials said was the first of its kind, accuses the websites' operators of violating state and federal laws that ban deepfake pornography, revenge pornography and child pornography, as well as California's unfair competition law. The names of the sites were redacted in the copy of the suit made public Thursday. Chiu's office has yet to identify the owners of many of the websites, but officials say they hope to find their names and hold them accountable. Chiu said the lawsuit has two goals: shutting down these websites and sounding the alarm about this form of "sexual abuse."


OpenAI considers allowing users to create AI-generated pornography

The Guardian

OpenAI, the company behind ChatGPT, is exploring whether users should be allowed to create artificial intelligence-generated pornography and other explicit content with its products. While the company stressed that its ban on deepfakes would continue to apply to adult material, campaigners suggested the proposal undermined its mission statement to produce "safe and beneficial" AI. OpenAI, which is also the developer of the DALL-E image generator, revealed it was considering letting developers and users "responsibly" create what it termed not-safe-for-work (NSFW) content through its products. OpenAI said this could include "erotica, extreme gore, slurs, and unsolicited profanity". It said: "We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts … We look forward to better understanding user and societal expectations of model behaviour in this area."


Laguna Beach High School investigates 'inappropriate' AI-generated images of students

Los Angeles Times

Laguna Beach High School administrators have launched an investigation after a student allegedly created and circulated "inappropriate images" of other students using artificial intelligence. It is not clear how many students are involved in the scandal, what specifically the images contained or how they were distributed. In an email to parents on March 25, Principal Jason Allemann wrote that school leadership is "taking steps to investigate and directly address this issue with those involved, while also using this situation as a teachable moment for our students, reinforcing the importance of responsible behavior and mutual respect." The Laguna Beach Police Department is assisting with the investigation, but a department spokesperson declined to provide any details on the probe because the individuals involved are minors. The Orange County high school joins a growing number of educational institutions grappling with the use of artificial intelligence in the classroom and in social settings.


A Deepfake Nude Generator Reveals a Chilling Look at Its Victims

WIRED

As AI-powered image generators have become more accessible, so have websites that digitally remove the clothes of people in photos. One of these sites has an unsettling feature that provides a glimpse of how these apps are used: two feeds of what appear to be photos uploaded by users who want to "nudify" the subjects. The feeds of images are a shocking display of intended victims. WIRED saw some images of girls who were clearly children. Other photos showed adults and had captions indicating that they were female friends or female strangers.


Nearly 4,000 celebrities found to be victims of deepfake pornography

The Guardian

More than 250 British celebrities are among the thousands of famous people who are victims of deepfake pornography, an investigation has found. A Channel 4 News analysis of the five most visited deepfake websites found almost 4,000 famous individuals were listed, of whom 255 were British. They include female actors, TV stars, musicians and YouTubers, who have not been named, whose faces were superimposed on to pornographic material using artificial intelligence. The investigation found that the five sites received 100m views in the space of three months. The Channel 4 News presenter Cathy Newman, who was found to be among the victims, said: "It feels like a violation. It just feels really sinister that someone out there who's put this together, I can't see them, and they can see this kind of imaginary version of me, this fake version of me."


Beverly Hills school district expels 8th graders involved in fake nude scandal

Los Angeles Times

Five Beverly Hills eighth-graders have been expelled for their involvement in the creation and sharing of fake nude pictures of their classmates. The Beverly Hills Unified School District board of education voted at a special meeting Wednesday evening to approve stipulated agreements of expulsion with five students. According to a source close to the investigation, the expelled students were attending Beverly Vista Middle School. Under a stipulated agreement, the students and their parents do not contest the punishment and no hearing was held. The names of the students were not released, and the agreements are confidential.


AI-powered 'Nudify' apps that digitally undress fully-clothed teenage girls are soaring in popularity

Daily Mail - Science & tech

Tens of millions of people are using AI-powered 'nudify' apps, according to a new analysis that shows the dark side of the technology. More than 24 million people visited nudify AI websites in September; these sites digitally alter images, primarily of women, to make the subjects appear naked using deep-learning algorithms. The algorithms are trained on existing images of women, which allows them to overlay realistic nude body parts, regardless of whether the photographed person is clothed. Spam ads directing people to these sites and apps have increased by more than 2,000 percent across major platforms since the beginning of 2023. Promotion of nudify apps is particularly prevalent on social media, including Google's YouTube, Reddit, and X, and 52 Telegram groups were also found to be used to access non-consensual intimate imagery (NCII) services.


What is Lensa AI, the selfie filter app that has users thrilled and concerned?

#artificialintelligence

Over the past week, Lensa AI -- an artificial intelligence-powered image filter app -- has raised a storm on social media platforms. The cause: after AI image generation platforms such as Midjourney made noise by creating pieces of art from a few words of text, Lensa AI has given AI art a new spin by turning users' selfies into virtuoso works of art. However, the social media storm has also seen many raise concerns about the service -- and what it means for user privacy and data security. What is the Lensa AI app? Lensa AI is actually not a new app; its recent spell of popularity stems from a recent update to its core technology. The app is built by Prisma Labs -- a California-based AI developer that also shot to popularity five years ago with another of its apps, called Prisma.