



Defense Department says Anthropic poses 'unacceptable risk' to national security

Engadget

The Pentagon has submitted a court filing in response to Anthropic's lawsuit challenging its 'supply chain risk' designation. The Department of Defense said that giving Anthropic continued access to its warfighting infrastructure would "introduce unacceptable risk" to its supply chains. If you'll recall, Anthropic sued the government to challenge the supply chain risk designation it received for refusing to allow its model to be used for mass surveillance and the development of autonomous weapons. In its filing, the department explained that its secretary, Pete Hegseth, had a provision incorporated into AI service contracts allowing the agency to use their technologies for any lawful purpose. Anthropic refused those terms, and the company's behavior apparently led the Pentagon to question whether it was truly a "trusted partner" it could work with on "highly sensitive" initiatives.


Trump administration defends Anthropic blacklisting in US court

Al Jazeera

The administration of United States President Donald Trump said in a court filing on Tuesday that the Pentagon's blacklisting of Anthropic was justified and lawful, opposing the artificial intelligence company's high-stakes lawsuit challenging the decision. The filing says Anthropic is unlikely to succeed in its claims that the US government's action violated speech protections under the US Constitution's First Amendment, asserting that the dispute stems from contract negotiations and national security concerns, not retaliation.


Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems

WIRED

In response to Anthropic's lawsuit, the government said it lawfully penalized the company for trying to limit how its Claude AI models could be used by the military. The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk, and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department.


The Download: OpenAI's US military deal, and Grok's CSAM lawsuit

MIT Technology Review

Plus: China has approved the world's first commercial brain chip. OpenAI has controversially agreed to give the Pentagon access to its AI. But where exactly could its technology show up in Iran, and which applications will its customers and employees tolerate? There's pressure to integrate it quickly with existing military tools; one defense official revealed it could even assist in selecting strike targets. OpenAI's partnership with Anduril, which makes drones and counter-drone technologies, adds another hint at what is to come.


AI firm Anthropic seeks weapons expert to stop users from 'misuse'

BBC News

The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent catastrophic misuse of its software. In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and wants an expert to ensure its guardrails are sufficiently robust. In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years' experience in chemical weapons and/or explosives defence, as well as knowledge of radiological dispersal devices, also known as dirty bombs. The firm told the BBC the role was similar to jobs it has already created in other sensitive areas. Anthropic is not the only AI firm adopting this strategy.



Where OpenAI's technology could show up in Iran

MIT Technology Review

Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. It's not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it's just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads).



The Download: glass chips and "AI-free" logos

MIT Technology Review

Plus: Elizabeth Warren wants answers on xAI's access to military data. Human-made glass is thousands of years old. But it's now poised to find its way into the AI chips used in the world's newest and largest data centers. This year, a South Korean company called Absolics will start producing special glass panels that make next-generation computing hardware more powerful and efficient. Other companies, including Intel, are also pushing forward in this area. If all goes well, the technology could reduce the energy demands of chips in AI data centers--and even consumer laptops and mobile devices.