Government backtracks on AI and copyright after outcry from major artists

BBC News

"We have listened," Technology Secretary Liz Kendall said on Wednesday, confirming that the government no longer favours its earlier approach. However, the government's position is now unclear: it says it no longer has a preferred option for what to do next. Kendall said the government had engaged extensively with people in the creative and AI industries. It is attempting to balance the interests of the two sectors by giving creatives control over how their work is used, while recognising that AI models need to be trained on work such as writing, music and video. In a report published on Wednesday, the government said there was no consensus on how these objectives should be achieved.


The Download: The Pentagon's new AI plans, and next-gen nuclear reactors

MIT Technology Review

Plus: The OpenClaw frenzy has led to a new Nvidia product. The Pentagon plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings, including for analyzing targets in Iran. But allowing them to train on and learn from classified data is a major new development that presents unique security risks. It would also bring AI firms closer to classified data than ever before. What do new nuclear reactors mean for waste?


The Defense Department reportedly plans to train AI models on classified military data

Engadget

The models will be separate versions built specifically for military use. The Pentagon is making plans to have AI companies train versions of their models specifically for military use on classified information, according to the report. If true, it wouldn't come as a surprise, seeing as the US is aiming to become an "AI-first warfighting force," based on the statement [PDF] released by Secretary of Defense Pete Hegseth earlier this year. The department is already using AI models in the military: for instance, the US reportedly used Anthropic's Claude to help with the capture of Venezuelan President Nicolás Maduro and with its attack on Iran, even after President Trump ordered federal agencies to ban its technology. But models trained on actual classified data could give more accurate and detailed responses for, say, situations similar to past events that aren't public information.


Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems

WIRED

Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems In response to Anthropic's lawsuit, the government said it lawfully penalized the company for trying to limit how its Claude AI models could be used by the military. The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department.


The Pentagon is planning for AI companies to train on classified data, defense official says

MIT Technology Review

The generative AI models used in classified environments can answer questions but don't currently learn from the data they see. The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded in the models themselves, and it would bring AI firms into closer contact with classified data than before. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background with MIT Technology Review.


An AI image generator for non-English speakers

AIHub

Although text-to-image generation is rapidly advancing, these AI models are mostly English-centric. Researchers at the University of Amsterdam Faculty of Science have created NeoBabel, an AI image generator that can work in six different languages. Because all elements of the research are open source, anyone can build on the model and help push inclusive AI research forward. When you generate an image with AI, the results are often better when your prompt is in English. This is because many AI models are English-centric at their core: if you use another language, your prompt is translated into English before the image is created.


Where OpenAI's technology could show up in Iran

MIT Technology Review

Where OpenAI's technology could show up in Iran Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. OpenAI is not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it's just about money; OpenAI is spending heavily on AI training and is on the hunt for more revenue (from sources including ads).



'100 Video Calls Per Day': Models Are Applying to Be the Face of AI Scams

WIRED

'100 Video Calls Per Day': Models Are Applying to Be the Face of AI Scams Dozens of Telegram channels reviewed by WIRED include job listings for "AI face models." The (mostly) women who land these gigs are likely being used to dupe victims out of their money. "I can speak fluent English, I can speak good Chinese, I also speak Russian and Turkish," the glamorous, 24-year-old Uzbekistani woman explains in a selfie-style video made for recruiters. Angel had arrived in the Cambodian city of Sihanoukville that day, she said, and was ready to start work immediately. Those impressive language skills, however, have likely been put to use as part of elaborate "pig-butchering" scams targeting Americans.


Compare top AI models with this $79 lifetime license

PCWorld

ChatPlayground AI lets you run a single prompt across multiple top AI models and compare the results instantly, now just $79 for lifetime access. Using AI tools can feel a bit like a juggling act. One model might be great for brainstorming, another for writing code, and another for summarizing documents. Before long, you're bouncing between platforms just to compare results.