AI Safety Meets the War Machine
Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract. When Anthropic last year became the first major AI company cleared by the US government for classified use--including military applications--the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic's AI in their defense work.
US military used Anthropic's AI model Claude in Venezuela raid, report says
A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the tool was required to comply with its policies. Wall Street Journal says Claude used in operation via Anthropic's partnership with Palantir Technologies. Sat 14 Feb 2026 11.15 EST. First published on Sat 14 Feb 2026 10.53 EST. Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday, a high-profile example of how the US defence department is using artificial intelligence in its operations. The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela's defence ministry. Anthropic's terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.
Claude looks even better as free users get more features to play with
PCWorld reports that Anthropic has significantly upgraded Claude's free tier with a new Skills feature for automating recurring tasks and Connectors for integrating external services like Canva and Slack. Free users can now create and edit Word, Excel, PowerPoint, and PDF files directly within Claude, plus enjoy longer conversations and enhanced voice and image search capabilities. These improvements position Claude as a stronger competitor to ChatGPT, especially as OpenAI recently introduced ads to its platform. AI company Anthropic is now upgrading the free version of its Claude chatbot with several features that were previously exclusive to paying users, reports Engadget.
Anthropic beefs up Claude's free tier as OpenAI prepares to stuff ads into ChatGPT's
You no longer need a subscription to create files or use Connectors and Skills in Claude. Anthropic is upgrading Claude's free tier, apparently to capitalize on OpenAI's planned integration of ads into ChatGPT. On Wednesday, Anthropic said free Claude users can now create files, connect to external services, use skills and more. Anthropic added the ability for paid users to create files in September. Starting today, free users of the chatbot can also create and edit Excel spreadsheets, PowerPoint presentations, Word docs and PDFs.
Sam Altman's Orb Was Built for the Bot Era. So Why Isn't It Everywhere?
Welcome back to TIME's twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? What to Know: Is Sam Altman's Orb missing its moment? When Moltbook, a social network for AI agents, went viral earlier this month, it should have been a moment of vindication for Tools for Humanity, the startup co-founded by Sam Altman, whose eyeball-scanning "Orb" was designed to solve exactly this kind of problem. Instead, it may have exposed the product's limitations.
What Is Claude? Anthropic Doesn't Know, Either
Researchers at the company are trying to understand their A.I. system's mind--examining its neurons, running it through psychology experiments, and putting it on the therapy couch. It has become increasingly clear that Claude's selfhood, much like our own, is a matter of both neurons and narratives.

A large language model is nothing more than a monumental pile of small numbers. It converts words into numbers, runs those numbers through a numerical pinball game, and turns the resulting numbers back into words. Similar piles are part of the furniture of everyday life. Meteorologists use them to predict the weather. Epidemiologists use them to predict the paths of diseases. Among regular people, they do not usually inspire intense feelings. But when these A.I. systems began to predict the path of a sentence--that is, to talk--the reaction was widespread delirium. As a cognitive scientist wrote recently, "For hurricanes or pandemics, this is as rigorous as science gets; for sequences of words, everyone seems to lose their mind." It's hard to blame them. Language is, or rather was, our special thing. We weren't prepared for the arrival of talking machines.

Ellie Pavlick, a computer scientist at Brown, has drawn up a taxonomy of our most common responses. There are the "fanboys," who man the hype wires. They believe that large language models are intelligent, maybe even conscious, and prophesy that, before long, they will become superintelligent. The venture capitalist Marc Andreessen has described A.I. as "our alchemy, our Philosopher's Stone--we are literally making sand think." The fanboys' deflationary counterparts are the "curmudgeons," who claim that there's no there there, and that only a blockhead would mistake a parlor trick for the soul of the new machine. In the recent book "The AI Con," the linguist Emily Bender and the sociologist Alex Hanna belittle L.L.M.s as "mathy maths," "stochastic parrots," and "a racist pile of linear algebra."
But, Pavlick writes, "there is another way to react." It is O.K., she offers, "to not know." What Pavlick means, on the most basic level, is that large language models are black boxes. We don't really understand how they work. We don't know if it makes sense to call them intelligent, or if it will ever make sense to call them conscious. The existence of talking machines--entities that can do many of the things that only we have ever been able to do--throws a lot of other things into question. We refer to our own minds as if they weren't also black boxes.