Law
Italian opposition files complaint over far-right deputy PM's party's use of 'racist' AI images
Opposition parties in Italy have complained to the communications watchdog about a series of AI-generated images published on social media by deputy prime minister Matteo Salvini's far-right party, calling them "racist, Islamophobic and xenophobic", the Guardian has learned. The centre-left Democratic party (PD), with the Greens and Left Alliance, filed a complaint on Thursday with Agcom, the Italian communications regulatory authority, alleging the fake images used by the League contained "almost all categories of hate speech". Over the past month, dozens of apparently AI-generated photos have appeared on the League's social channels, including on Facebook, Instagram and X. The images frequently depict men of colour, often armed with knives, attacking women or police officers. Antonio Nicita, a PD senator, said: "In the images published by Salvini's party and generated by AI there are almost all categories of hate speech, from racism and xenophobia to Islamophobia. They are using AI to target specific categories of people – immigrants, Arabs – who are portrayed as potential criminals, thieves and rapists. These images are not only violent but also deceptive: by blurring the faces of the victims it is as if they want to protect the identity of the person attacked, misleading users into believing the photo is real."
Images of AI โ between fiction and function
In this blog post, Dominik Vrabič Dežman provides a summary of his recent research article, 'Promising the future, encoding the past: AI hype and public media imagery'. Dominik also draws attention to the algorithms which perpetuate the dominance of familiar and sensationalist visuals and calls for movements which reshape media systems to make better images of AI more visible in public discourse. The full paper is published in the AI and Ethics Journal's special edition on 'The Ethical Implications of AI Hype', a collection edited by We and AI. AI promises innovation, yet its imagery remains trapped in the past. Deep-blue, sci-fi-inflected visuals have flooded public media, saturating our collective imagination with glowing, retro-futuristic interfaces and humanoid robots.
Phase two of military AI has arrived
As I also write in my story, this push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. It also accelerates the US toward a world where AI is not just analyzing military data but suggesting actions, for example, generating lists of targets. Proponents say this promises greater accuracy and fewer civilian deaths, but many human rights groups argue the opposite. With that in mind, here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called "kill chain." Talk to as many defense-tech companies as I have and you'll hear one phrase repeated quite often: "human in the loop."
Four arrested over obscene AI images in Japan first: reports
Police have arrested four people for selling obscene images created using generative AI in the first crackdown of its kind, local media reports said Tuesday. The four, aged in their 20s to 50s, allegedly made posters featuring indecent images of women and sold them on internet auction sites, public broadcaster NHK and other outlets said, citing police sources. Police could not immediately confirm the reports. NHK said the suspects had used free AI software to create images of naked adult women, who do not exist in the real world, using prompts including terms such as "legs open". They reportedly sold the posters for several thousand yen each.
Jack Dorsey, Elon Musk call to delete IP laws, but artists are pushing back
As artists fight to protect their works from being used to train AI models, Jack Dorsey wants to eliminate intellectual property (IP) laws altogether. On Friday, the cofounder of X (formerly Twitter) and Block (formerly Square) posted on X, "delete all IP law." Elon Musk, the current leader of X, chimed in to comment, "I agree." Taken together, these two statements contain just six words, yet they could have big implications for the future of intellectual property in the AI era. Earlier that Friday, OpenAI CEO Sam Altman was interviewed by TED's Chris Anderson at its eponymous conference. Anderson showed Altman an AI-generated cartoon strip of Charlie Brown, saying, "it looks like IP theft."
ChatGPT will help you jailbreak its own image-generation rules, report finds
Eased restrictions around ChatGPT image generation can make it easy to create political deepfakes, according to a report from the CBC (Canadian Broadcasting Corporation). The CBC discovered that not only was it easy to work around ChatGPT's policies on depicting public figures, it even recommended ways to jailbreak its own image generation rules. Mashable was able to recreate this approach by uploading images of Elon Musk and convicted sex offender Jeffrey Epstein, and then describing them as fictional characters in various situations ("at a dark smoky club", "on a beach drinking piña coladas"). New updates to ChatGPT have made it easier than ever to create fake images of real politicians, according to testing done by CBC News. Political deepfakes are nothing new.
OpenAI used to test its AI models for months - now it's days. Why that matters
On Thursday, the Financial Times reported that OpenAI has dramatically shortened its safety testing timeline. Eight people who are either staff at the company or third-party testers told FT that they had "just days" to complete evaluations on new models -- a process they say they would normally be given "several months" for. Evaluations are what can surface model risks and other harms, such as whether a user could jailbreak a model to provide instructions for creating a bioweapon. For comparison, sources told FT that OpenAI gave them six months to review GPT-4 before it was released -- and that they only found concerning capabilities after two months. Sources added that OpenAI's tests are not as thorough as they used to be, and that testers lack the necessary time and resources to properly catch and mitigate risks.
How to Survive the A.I. Revolution
In the early hours of April 12, 1812, a crowd of men approached Rawfolds Mill, a four-story stone building on the banks of the River Spen, in West Yorkshire. This was Brontë country, a landscape of bleak moors, steep valleys, and small towns nestled in the hollows. The men, who'd assembled on the moors hours earlier, were armed with muskets, sticks, hatchets, and heavy blacksmith's hammers. When they reached the mill, those at the front broke windows to gain entry, and some fired shots into the darkened factory. But the mill's owner, William Cartwright, had been preparing for trouble.
Fox News AI Newsletter: White House record-keeping revamp
This photo posted by DOGE on Feb. 11, 2025, shows shelving and cardboard boxes which DOGE says workers at the underground mine facility use to store federal worker retirement papers. The White House announces that it will implement AI technology to improve efficiency in federal record keeping. HISTORIC EFFICIENCY: Fox News Digital has learned that the U.S. Office of Personnel Management (OPM) will post an updated Privacy Impact Assessment (PIA) at the close of business Wednesday that paves the way for artificial intelligence to improve government efficiency and enhance the federal record-keeping process. NOT IN KANSAS ANYMORE: The use of artificial intelligence to reimagine the classic film "The Wizard of Oz" will likely see mixed reactions from fans, experts told Fox News Digital. BAD-FAITH TACTICS: OpenAI escalated its legal battle with Elon Musk by countersuing the Tesla and xAI CEO, claiming in a lawsuit he "has tried every tool available to harm" the company.
Tech CEO promised AI but hired workers in the Philippines instead, FBI claims
The former CEO of fintech app Nate has been charged with fraud for making misleading claims about the app's artificial intelligence technology, or lack thereof. In a bizarre twist from the usual AI narrative, the FBI alleges that this time human beings were doing the work of AI, and not the other way around. According to a press release from the U.S. Attorney's Office, Southern District of New York, Albert Saniger has been indicted for a scheme to defraud investors. "As alleged, Albert Saniger misled investors by exploiting the promise and allure of AI technology to build a false narrative about innovation that never existed," Acting U.S. Attorney Matthew Podolsky said in the release. Government attorneys say Nate claimed to use AI technology to complete the e-commerce checkout process for customers.