

Meet the Gods of AI Warfare

WIRED

In its early days, the AI initiative known as Project Maven had its fair share of skeptics at the Pentagon. Today, many of them are true believers. The rise of AI warfare speaks to the biggest moral and practical question there is: Who--or what--gets to decide to take a human life? And who bears that cost? In 2018, more than 3,000 Google workers protested the company's involvement in "the business of war" after finding out the company was part of Project Maven, then a nascent Pentagon effort to use computer vision to rifle through copious video footage taken in America's overseas drone wars. They feared Project Maven's AI could one day be used for lethal targeting. In my yearslong effort to uncover the full story of Project Maven for my book, I learned that is exactly what happened, and that the undertaking was just as controversial inside the Pentagon. Today, the tool known as Maven Smart System is being used in US operations against Iran. How the US military's top brass went from skeptics of AI in war to true believers has a lot to do with a Marine colonel named Drew Cukor. In early September 2024, during the cocktail hour at a private retreat for tech investors and defense leaders, Vice Admiral Frank "Trey" Whitworth found his way to Drew Cukor. Now Project Maven's founding leader and his skeptical successor were standing face-to-face. Three years earlier, Whitworth had been the Pentagon's top military official for intelligence, advising the chairman of the Joint Chiefs of Staff and running one of the most sensitive and potentially lethal parts of any military process: targeting.


Anthropic Denies It Could Sabotage AI Tools During War

WIRED

The Department of Defense alleges the AI developer could manipulate models in the middle of war. Company executives argue that's impossible. Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to accusations from the Trump administration about the company potentially tampering with its AI tools during war. "Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations," Thiyagu Ramasamy, Anthropic's head of public sector, wrote.


At Palantir's Developer Conference, AI Is Built to Win Wars

WIRED

As business soars, Palantir is doubling down on a vision of AI built for battlefield advantage--and attracting customers who agree. The defense contractors, military officers, and corporate executives in attendance are unprepared for the weather; they'd assumed the previous day's mid-70s temperatures would hold. A cold rain turns to steady snowfall, and Palantir passes out heavy blankets. As people move between the open-air pavilions wrapped in them, they look like they were pulled from shipwrecks. To this self-selecting crowd, Palantir is delivering on its promises.


The Download: The Pentagon's new AI plans, and next-gen nuclear reactors

MIT Technology Review

Plus: The OpenClaw frenzy has led to a new Nvidia product. The Pentagon plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings, including for analyzing targets in Iran. But allowing them to train on and learn from classified data is a major new development that presents unique security risks. It would also bring AI firms closer to classified data than ever before. What do new nuclear reactors mean for waste?


Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems

WIRED

In response to Anthropic's lawsuit, the government said it lawfully penalized the company for trying to limit how its Claude AI models could be used by the military. The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department.


The Pentagon is planning for AI companies to train on classified data, defense official says

MIT Technology Review

The generative AI models used in classified environments can answer questions but don't currently learn from the data they see. The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded in the models themselves, and it would bring AI firms into closer contact with classified data than ever before. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background with MIT Technology Review.
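To make that risk concrete, here is a minimal sketch of how fine-tuning embeds a training sentence into a model's weights. It is entirely hypothetical: the small open gpt2 model and the Hugging Face transformers library stand in for any base model brought into a secure enclave, and the "sensitive" sentence is invented for illustration, not drawn from any real system or data.

    # Hypothetical illustration: fine-tune on one invented "sensitive" sentence.
    # After a few dozen steps the model can often reproduce it verbatim, because
    # the text is now encoded in the parameters rather than stored in a database.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in base model
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.train()

    secret = "Asset VIPER relocates to the northern depot at 0400."  # invented text
    batch = tok(secret, return_tensors="pt")
    opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

    for _ in range(30):                                  # overfit to the one sentence
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

    model.eval()                                         # prompt with the opening words
    prompt = tok("Asset VIPER relocates", return_tensors="pt")
    out = model.generate(**prompt, max_new_tokens=12, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))  # often completes verbatim

The point of the sketch is that once training is allowed, sensitive text can no longer simply be deleted or access-controlled the way a document can: scrubbing memorized content out of model weights remains an open research problem, which is why the distinction between models answering questions about classified data and learning from it matters.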


Where OpenAI's technology could show up in Iran

MIT Technology Review

Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. OpenAI is not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of its pivot was notable. Perhaps it's just about money; OpenAI is spending heavily on AI training and is on the hunt for more revenue (from sources including ads).


The Download: how AI is used for military targeting, and the Pentagon's war on Claude

MIT Technology Review

Plus: an ex-DOGE staffer has been accused of stealing Social Security data. The US military might use generative AI systems to rank targets and recommend which to strike first, according to a Defense Department official. A list of possible targets could first be fed into a generative AI system that the Pentagon is fielding for classified settings. Humans might then ask the system to analyze the information and prioritize the targets. They would then be responsible for checking and evaluating the results and recommendations. OpenAI's ChatGPT and xAI's Grok could soon be at the center of exactly these sorts of high-stakes military decisions.


Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

The Guardian

Less than a decade ago, Google employees scuttled the company's military AI work. The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war - and what lines it will not cross. Amid Silicon Valley's rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech's answer looks very different than it did even a few years ago. Anthropic's feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government's decision to blacklist it from government work violated its first amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons.


A defense official reveals how AI chatbots could be used for targeting decisions

MIT Technology Review

Though the US military's big data initiative Maven has sped up the planning of strikes for years, the comments suggest that generative AI is now adding a new interpretive layer to such deliberations. The US military might use generative AI systems to rank lists of targets and make recommendations--which would be vetted by humans--about which to strike first, according to a Defense Department official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating. A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background with MIT Technology Review to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations.