Apple and Google reportedly still offer dozens of AI 'nudify' apps
Both platforms also still host the Grok app, which has been known to create nonconsensual images. A recent investigation by the Tech Transparency Project (TTP), an online advocacy organization, found that the Apple App Store and Google Play Store still offer dozens of AI 'nudify' apps: applications that create nonconsensual and sexualized images, in clear violation of both companies' store policies. All told, the investigation found 55 apps of this type in the Google Play Store and 47 in the Apple App Store.
- Leisure & Entertainment (0.31)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.31)
- Law (0.31)
- Information Technology > Communications > Mobile (0.55)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.53)
Deepfake 'Nudify' Technology Is Getting Darker--and More Dangerous
Sexual deepfakes continue to get more sophisticated, capable, easy to access, and perilous for millions of women who are abused with the technology. Open the website of one explicit deepfake generator and you'll be presented with a menu of horrors. With just a couple of clicks, it offers you the ability to convert a single photo into an eight-second explicit videoclip, inserting women into realistic-looking graphic sexual situations. "Transform any photo into a nude version with our advanced AI technology," text on the website says. The options for potential abuse are extensive.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Massachusetts (0.04)
- North America > United States > California (0.04)
- (4 more...)
U.K. Cracks Down on AI 'Nudify' Tech, Announces Investigation Into X
In this photo illustration, a screen displays examples of AI prompt-created videos, made with xAI's Grok app, on January 12, 2026 in London, England. The United Kingdom plans to bring into force a law that criminalizes the creation of non-consensual sexualized images, including through Grok, the chatbot within Elon Musk's X application, following the app's deepfake scandal of the last few weeks. "This means individuals are committing a criminal offence if they create--or seek to create--such content--including on X--and anyone who does this should expect to face the full extent of the law," Technology Secretary Liz Kendall announced in the House of Commons Monday, adding that the government would also work to make it illegal for companies to supply the tools designed to create these nonconsensual images. The move came just hours after the Office of Communications (Ofcom)--the country's independent regulator for the communications industry--announced that it will investigate X and the thousands of pornographic images generated by Grok that flooded the app, including sexualized images of what appear to be minors.
- Europe > United Kingdom > England > Greater London > London (0.46)
- North America > United States (0.30)
- Europe > France (0.06)
- (5 more...)
- Law (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (1.00)
Why Are Grok and X Still Available in App Stores?
Elon Musk's chatbot has been used to generate thousands of sexualized images of adults and apparent minors. Apple and Google have removed other "nudify" apps--but continue to host X and Grok. Elon Musk's AI chatbot Grok is being used to flood X with thousands of sexualized images of adults and apparent minors wearing minimal clothing. Some of this content appears to not only violate X's own policies, which prohibit sharing illegal content such as child sexual abuse material (CSAM), but may also violate the guidelines of Apple's App Store and the Google Play store.
- North America > United States > California (0.15)
- South America > Venezuela (0.05)
- Europe > Slovakia (0.05)
- (3 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.70)
Australia clamps down on 'nudify' sites used for AI-generated child abuse
Internet users in Australia have been blocked from accessing several websites that used artificial intelligence to create child sexual exploitation material, the country's internet regulator has announced. The three "nudify" sites withdrew from Australia following an official warning, eSafety Commissioner Julie Inman Grant said on Thursday. Grant said such "nudify" services, which allow users to make images of real people appear naked using AI, have had a "devastating" effect in Australian schools. "We took enforcement action in September because this provider failed to put in safeguards to prevent its services being used to create child sexual exploitation material and were even marketing features like undressing 'any girl,' and with options for 'schoolgirl' image generation and features such as 'sex mode,'" Grant said in a statement. The development comes after Grant's office issued a formal warning to the United Kingdom-based company behind the sites in September, threatening civil penalties of up to 49.5 million Australian dollars ($32.2m) if it did not introduce safeguards to prevent image-based abuse.
- Oceania > Australia (1.00)
- Europe > United Kingdom (0.26)
- North America > Canada (0.07)
- (9 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.62)
Australia moves to stamp out 'nudify' and stalking apps
Australia has announced plans to ban apps used for stalking and creating deepfake nudes. Tech platforms will be responsible for preventing access to "nudify" and undetectable online stalking tools under the reforms announced on Tuesday by the Australian government. Minister for Communications Anika Wells said Australia would work with firms to stamp out "abhorrent technologies" while ensuring "legitimate and consent-based" artificial intelligence (AI) and online tracking services were not adversely affected. "Abusive technologies are widely and easily accessible and are causing real and irreparable damage now," Wells said in a statement. "These new, evolving, technologies require a new, proactive, approach to harm prevention – and we'll work closely with industry to achieve this." "While this move won't eliminate the problem of abusive technology in one fell swoop, alongside existing laws and our world-leading online safety reforms, it will make a real difference in protecting Australians," she added.
- Oceania > Australia (1.00)
- North America > United States (0.07)
AI 'Nudify' Websites Are Raking in Millions of Dollars
For years, so-called "nudify" apps and websites have mushroomed online, allowing people to create nonconsensual and abusive images of women and girls, including child sexual abuse material. Despite some lawmakers and tech companies taking steps to limit the harmful services, every month, millions of people are still accessing the websites, and the sites' creators may be making millions of dollars each year, new research suggests. An analysis of 85 nudify and "undress" websites--which allow people to upload photos and use AI to generate "nude" pictures of the subjects with just a few clicks--has found that most of the sites rely on tech services from Google, Amazon, and Cloudflare to operate and stay online. The findings, revealed by Indicator, a publication investigating digital deception, say that the websites had a combined average of 18.5 million visitors for each of the past six months and collectively may be making up to $36 million per year. Alexios Mantzarlis, a cofounder of Indicator and an online safety researcher, says the murky nudifier ecosystem has become a "lucrative business" that "Silicon Valley's laissez-faire approach to generative AI" has allowed to persist.
- Information Technology (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.92)
- Law (0.92)
Millions of People Are Using Abusive AI 'Nudify' Bots on Telegram
In early 2020, deepfake expert Henry Ajder uncovered one of the first Telegram bots built to "undress" photos of women using artificial intelligence. At the time, Ajder recalls, the bot had been used to generate more than 100,000 explicit photos--including those of children--and its development marked a "watershed" moment for the horrors deepfakes could create. Since then, deepfakes have become more prevalent, more damaging, and easier to produce. Now, a WIRED review of Telegram communities involved with the explicit nonconsensual content has identified at least 50 bots that claim to create explicit photos or videos of people with only a couple of clicks. The bots vary in capabilities, with many suggesting they can "remove clothes" from photos while others claim to create images depicting people in various sexual acts.
- Europe > Italy (0.06)
- Asia > South Korea (0.06)
AI-powered deepfake nude websites are targeted by San Francisco city attorney's lawsuit
David Chiu announced Thursday that his office is suing the operators of 16 A.I.-powered "undressing" websites that help users create and distribute deepfake nude photos of women and girls. The lawsuit, which city officials said was the first of its kind, accuses the websites' operators of violating state and federal laws that ban deepfake pornography, revenge pornography and child pornography, as well as California's unfair competition law. The names of the sites were redacted in the copy of the suit made public Thursday. Chiu's office has yet to identify the owners of many of the websites, but officials say they hope to find their names and hold them accountable. Chiu said the lawsuit has two goals: shutting down these websites and sounding the alarm about this form of "sexual abuse."
- North America > United States > California > San Francisco County > San Francisco (0.44)
- North America > United States > California > Los Angeles County > Beverly Hills (0.08)
- North America > United States > New Jersey (0.06)
- Law > Litigation (1.00)
- Information Technology > Security & Privacy (0.91)
- Law > Government & the Courts (0.75)
- (2 more...)
AI-powered 'Nudify' apps that digitally undress fully-clothed teenage girls are soaring in popularity
Tens of millions of people are using AI-powered 'nudify' apps, according to a new analysis that shows the dark side of the technology. More than 24 million people visited nudify AI websites in September; these sites digitally alter images, primarily of women, to make the subjects appear naked, using deep-learning algorithms. The algorithms are trained on existing images of women, allowing them to overlay realistic images of nude body parts regardless of whether the photographed person is clothed. Spam ads directing people to the sites and apps have also increased by more than 2,000 percent across major platforms since the beginning of 2023. The rise in ads promoting nudify apps is particularly prevalent on social media, including Google's YouTube, Reddit, and X, and 52 Telegram groups were also found to be used to access non-consensual intimate imagery (NCII) services.
- North America > United States > Washington > King County > Seattle (0.05)
- North America > United States > New Jersey (0.05)
- Europe > Spain (0.05)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.75)