Anthropic Denies It Could Sabotage AI Tools During War

WIRED

The Department of Defense alleges the AI developer could manipulate models in the middle of war. Company executives argue that's impossible. Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to accusations from the Trump administration about the company potentially tampering with its AI tools during war. "Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations," Thiyagu Ramasamy, Anthropic's head of public sector, wrote.


The Human Skill That Eludes AI

The Atlantic - Technology

Why can't language models write well? In a certain, strange way, generative AI peaked with OpenAI's GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. "You could be like, 'Continue this story:,' and GPT-2 would be like, ','" Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. "The models won't do that anymore." AI leaders boast about their models' superhuman technical abilities.


Anthropic is doubling Claude's usage limits during off-peak hours for the next two weeks

Engadget

The promotion runs from March 13 to March 27. To capitalize on Claude's recent spike in popularity, Anthropic is offering a limited-time promotion that doubles usage limits for anyone using its AI chatbot during off-peak hours. From March 13 to March 27, users on Free, Pro, Max, and Team plans will get double the usage limits in a five-hour window when using Claude outside weekday hours between 8 AM and 2 PM ET. According to Anthropic, the promotion is automatic, and users don't have to enable anything to get the benefits. A small thank you to everyone using Claude: We're doubling usage outside our peak hours for the next two weeks.


Claude can now generate charts and diagrams

Engadget

Claude can now generate visuals when producing a response. With Claude enjoying a moment of newfound popularity among regular people, Anthropic is previewing an update designed to make its chatbot better at explaining some concepts. Starting today, Claude can generate charts and diagrams as part of its responses, either when asked directly or when it decides visuals might be helpful to the user. For example, try asking Claude what's the best way to fold a paper plane. Where previously it was limited to text, now it can show you step by step how to fold a Nakamura lock plane.


How AI firm Anthropic wound up in the Pentagon's crosshairs

The Guardian

This week has brought more chaos in the feud between the Pentagon and Anthropic. Until recently, Anthropic was one of the quieter names in the artificial intelligence boom. Despite being valued at about $350bn, it rarely generated the flashy headlines or public backlash associated with Sam Altman's OpenAI or Elon Musk's xAI. Its CEO and co-founder Dario Amodei was an industry fixture but hardly a household name outside of Silicon Valley, and its chatbot Claude lagged in popularity behind ChatGPT.


Claude AI: Why are there so many internet outages?

New Scientist

AI chatbot Claude going down is just one example of a recent IT outage. This week, AI chatbot Claude went down, leaving users unable to access the service via its maker Anthropic's website, but barely a week goes by without a similar incident at a technology giant, government website or hospital. One of the main vulnerabilities of the modern internet is the shift to cloud computing, meaning a huge range of websites and services now rely on just a handful of companies, such as Amazon and Microsoft. In the early days of the commercial internet in the 1990s, companies used to operate their own hardware and software, a bit like individual shops in a street.


AI Safety Meets the War Machine

WIRED

Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract. When Anthropic last year became the first major AI company cleared by the US government for classified use--including military applications--the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic's AI in their defense work.


Maduro raid questions trigger Pentagon review of top AI firm as potential 'supply chain risk'

FOX News



US military used Anthropic's AI model Claude in Venezuela raid, report says

The Guardian

A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the tool was required to comply with its policies. Wall Street Journal says Claude was used in the operation via Anthropic's partnership with Palantir Technologies. Sat 14 Feb 2026 11.15 EST; first published on Sat 14 Feb 2026 10.53 EST. Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday, a high-profile example of how the US defence department is using artificial intelligence in its operations. The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela's defence ministry. Anthropic's terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.


AI tool Claude helped capture Venezuelan dictator Maduro in US military raid operation: report

FOX News
