Anthropic Denies It Could Sabotage AI Tools During War
The Department of Defense alleges the AI developer could manipulate models in the middle of war. Company executives argue that's impossible. Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to accusations from the Trump administration that the company could tamper with its AI tools during war. "Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations," Thiyagu Ramasamy, Anthropic's head of public sector, wrote.
- North America > United States > California > San Francisco County > San Francisco (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- North America > United States > Arizona (0.05)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
The Human Skill That Eludes AI
Why can't language models write well? In a certain, strange way, generative AI peaked with OpenAI's GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. "You could be like, 'Continue this story:,' and GPT-2 would be like, ','" Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. "The models won't do that anymore." AI leaders boast about their models' superhuman technical abilities.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.56)
Anthropic is doubling Claude's usage limits during off-peak hours for the next two weeks
The promotion runs from March 13 to March 27. To capitalize on Claude's recent spike in popularity, Anthropic is offering a limited-time promotion that doubles usage limits for anyone using its AI chatbot during off-peak hours. From March 13 to March 27, users on Free, Pro, Max, and Team plans will get double the usage limits per five-hour window when using Claude outside the peak weekday hours of 8 AM to 2 PM ET. According to Anthropic, the promotion is automatic; users don't have to enable anything to get the benefits. A small thank you to everyone using Claude: We're doubling usage outside our peak hours for the next two weeks.
- Marketing (0.45)
- Government (0.38)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.90)
Claude can now generate charts and diagrams
Claude can now generate visuals when producing a response. With Claude enjoying a moment of newfound popularity among regular people, Anthropic is previewing an update designed to make its chatbot better at explaining some concepts. Starting today, Claude can generate charts and diagrams as part of its responses, either when asked directly or when it decides visuals might be helpful to the user. For example, try asking Claude for the best way to fold a paper plane. Where previously it was limited to text, it can now show you, step by step, how to fold a Nakamura lock plane.
- Health & Medicine (0.53)
- Marketing (0.46)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.50)
How AI firm Anthropic wound up in the Pentagon's crosshairs
This week has brought more chaos in the feud between the Pentagon and Anthropic. Until recently, Anthropic was one of the quieter names in the artificial intelligence boom. Despite being valued at about $350bn, it rarely generated the flashy headlines or public backlash associated with Sam Altman's OpenAI or Elon Musk's xAI. Its CEO and co-founder Dario Amodei was an industry fixture but hardly a household name outside of Silicon Valley, and its chatbot Claude lagged in popularity behind ChatGPT.
- North America > United States > California (0.25)
- Asia > Middle East > Iran (0.06)
- Europe > Ukraine (0.05)
- Law (1.00)
- Information Technology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.95)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.72)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.58)
Claude AI: Why are there so many internet outages?
AI chatbot Claude going down is just one example of a recent IT outage. Anthropic's Claude chatbot recently had service troubles. This week, AI chatbot Claude went down, leaving users unable to access the service via its maker Anthropic's website, but barely a week goes by without a similar incident at a technology giant, government website or hospital. One of the main vulnerabilities of the modern internet is the shift to cloud computing: a huge range of websites and services now rely on just a handful of companies, such as Amazon and Microsoft. In the early days of the commercial internet in the 1990s, companies operated their own hardware and software, a bit like individual shops on a street.
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government > Military > Cyberwarfare (0.30)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
AI Safety Meets the War Machine
Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract. When Anthropic last year became the first major AI company cleared by the US government for classified use, including military applications, the news didn't make a major splash. But this week a second development hit like a cannonball: the Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, such as China; under that designation, the Pentagon would not do business with firms using Anthropic's AI in their defense work.
- South America > Venezuela (0.29)
- Asia > China (0.25)
- North America > United States > California (0.15)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.51)
- South America > Venezuela (0.15)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
US military used Anthropic's AI model Claude in Venezuela raid, report says
A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the tool was required to comply with its policies. Wall Street Journal says Claude was used in the operation via Anthropic's partnership with Palantir Technologies. Sat 14 Feb 2026 11.15 EST. First published on Sat 14 Feb 2026 10.53 EST. Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday, a high-profile example of how the US defence department is using artificial intelligence in its operations. The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela's defence ministry. Anthropic's terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.
- North America > United States (1.00)
- South America > Venezuela > Capital District > Caracas (0.25)
- Oceania > Australia (0.07)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- South America > Venezuela (0.48)
- North America > United States > Texas (0.04)
- North America > United States > South Carolina (0.04)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)