AI Safety Meets the War Machine

WIRED 

Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract.

When Anthropic last year became the first major AI company cleared by the US government for classified use, including military applications, the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations.

The so-called Department of War might even designate Anthropic a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, such as China. That designation would mean the Pentagon refusing to do business with firms that use Anthropic's AI in their defense work.