The Download: The Pentagon's new AI plans, and next-gen nuclear reactors

MIT Technology Review

Plus: The OpenClaw frenzy has led to a new Nvidia product. The Pentagon plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings, including for analyzing targets in Iran. But allowing them to train on and learn from classified data is a major new development that presents unique security risks. It would also bring AI firms closer to classified data than ever before. What do new nuclear reactors mean for waste?


Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems

WIRED

In response to Anthropic's lawsuit, the government said it lawfully penalized the company for trying to limit how its Claude AI models could be used by the military. The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department.


The Pentagon is planning for AI companies to train on classified data, defense official says

MIT Technology Review

The generative AI models used in classified environments can answer questions but don't currently learn from the data they see. The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded in the models themselves, and it would bring AI firms into closer contact with classified data than before. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background with MIT Technology Review.


Where OpenAI's technology could show up in Iran

MIT Technology Review

Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. OpenAI is not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it's just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads).


The Download: how AI is used for military targeting, and the Pentagon's war on Claude

MIT Technology Review

Plus: an ex-DOGE staffer has been accused of stealing social security data. The US military might use generative AI systems to rank targets and recommend which to strike first, according to a Defense Department official. A list of possible targets could first be fed into a generative AI system that the Pentagon is fielding for classified settings. Humans might then ask the system to analyze the information and prioritize the targets. They would then be responsible for checking and evaluating the results and recommendations. OpenAI's ChatGPT and xAI's Grok could soon be at the center of exactly these sorts of high-stakes military decisions.


Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

The Guardian

Less than a decade ago, Google employees scuttled any military use of its AI. The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war - and what lines it will not cross. Amid Silicon Valley's rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech's answer is looking very different than it did even less than a decade ago. Anthropic's feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government's decision to blacklist it from government work violated its first amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons.


A defense official reveals how AI chatbots could be used for targeting decisions

MIT Technology Review

Though the US military's big data initiative Maven has sped up the planning of strikes for years, the comments suggest that generative AI is now adding a new interpretive layer to such deliberations. The US military might use generative AI systems to rank lists of targets and make recommendations--which would be vetted by humans--about which to strike first, according to a Defense Department official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating. A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background with MIT Technology Review to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations.


Microsoft backs AI firm Anthropic in legal battle against Pentagon

The Guardian

Tech company files amicus brief in support of Anthropic's effort to overturn an aggressive Pentagon designation. Microsoft has thrown its weight behind Anthropic's legal challenge against the Pentagon, filing a court brief in support of the AI company's effort to overturn an aggressive designation that effectively bars it from government work. In an amicus brief submitted to a federal court in San Francisco this week, Microsoft, which integrates Anthropic's AI tools into systems it provides to the US military, argued that a temporary restraining order was necessary to prevent serious disruption to suppliers whose products rely on the AI company's technology. Google, Amazon, Apple and OpenAI have also signed on to a brief in support of Anthropic. In a statement to the Guardian, Microsoft said: "The Department of War needs reliable access to the country's best technology.


Dario Amodei's Oppenheimer Moment

The Atlantic - Technology

It came earlier than expected. More than a year before his recent standoff with the Pentagon, Dario Amodei, the chief executive of Anthropic, published a 15,000-word manifesto describing a glorious AI future. Its title, "Machines of Loving Grace," is borrowed from a Richard Brautigan poem, but as Amodei acknowledged, with some embarrassment, its utopian vision bears some resemblance to science fiction. According to Amodei, we will soon create the first polymath AIs with abilities that surpass those of Nobel Prize winners in "most relevant fields," and we'll have millions of them, a "country of geniuses," all packed into the glowing server racks of a data center, working together. With access to tools that operate directly on our physical world, these AIs would be able to get up to a great deal of dangerous mischief, but according to Amodei, if they're developed--or "grown," as staffers at Anthropic are fond of saying--in the correct way, they will decide to greatly improve our lives. Amodei does not explain precisely how the AIs will accomplish this.


What Anthropic's Clash With the Pentagon Is Really About

The Atlantic - Technology

Who will take responsibility for the technology? The weekslong conflict between Anthropic and the Department of Defense is entering a new phase. After DOD designated the company a supply-chain risk last week, a label that effectively forbids Pentagon contractors from using its products, the AI firm filed a lawsuit against DOD this morning alleging that the government's actions were unconstitutional and ideologically motivated. Then, this afternoon, 37 employees from OpenAI and Google DeepMind--including Google's chief scientist, Jeff Dean--signed an amicus brief in support of Anthropic, in essence lending support to one of their employers' greatest business rivals (even as OpenAI itself has established a controversial new contract with DOD). For the past few weeks, Anthropic has been in heated negotiations with the Pentagon over how the U.S. military can use the firm's AI systems.