Collaborating Authors

 amodei


Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

The Guardian

Less than a decade ago, Google employees scuttled any military use of its AI. The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war - and what lines it will not cross. Amid Silicon Valley's rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech's answer looks very different from how it did less than a decade ago. Anthropic's feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government's decision to blacklist it from government work violated its first amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons.


Dario Amodei's Oppenheimer Moment

The Atlantic - Technology

It came earlier than expected. More than a year before his recent standoff with the Pentagon, Dario Amodei, the chief executive of Anthropic, published a 15,000-word manifesto describing a glorious AI future. Its title, "Machines of Loving Grace," is borrowed from a Richard Brautigan poem, but as Amodei acknowledged, with some embarrassment, its utopian vision bears some resemblance to science fiction. According to Amodei, we will soon create the first polymath AIs with abilities that surpass those of Nobel Prize winners in "most relevant fields," and we'll have millions of them, a "country of geniuses," all packed into the glowing server racks of a data center, working together. With access to tools that operate directly on our physical world, these AIs would be able to get up to a great deal of dangerous mischief, but according to Amodei, if they're developed--or "grown," as staffers at Anthropic are fond of saying--in the correct way, they will decide to greatly improve our lives. Amodei does not explain precisely how the AIs will accomplish this.


How AI firm Anthropic wound up in the Pentagon's crosshairs

The Guardian

This week has brought more chaos in the feud between the Pentagon and Anthropic. Until recently, Anthropic was one of the quieter names in the artificial intelligence boom. Despite being valued at about $350bn, it rarely generated the flashy headlines or public backlash associated with Sam Altman's OpenAI or Elon Musk's xAI. Its CEO and co-founder Dario Amodei was an industry fixture but hardly a household name outside of Silicon Valley, and its chatbot Claude lagged in popularity behind ChatGPT.


What Is Claude? Anthropic Doesn't Know, Either

The New Yorker

Researchers at the company are trying to understand their A.I. system's mind--examining its neurons, running it through psychology experiments, and putting it on the therapy couch. It has become increasingly clear that Claude's selfhood, much like our own, is a matter of both neurons and narratives. A large language model is nothing more than a monumental pile of small numbers. It converts words into numbers, runs those numbers through a numerical pinball game, and turns the resulting numbers back into words. Similar piles are part of the furniture of everyday life. Meteorologists use them to predict the weather. Epidemiologists use them to predict the paths of diseases. Among regular people, they do not usually inspire intense feelings. But when these A.I. systems began to predict the path of a sentence--that is, to talk--the reaction was widespread delirium. As a cognitive scientist wrote recently, "For hurricanes or pandemics, this is as rigorous as science gets; for sequences of words, everyone seems to lose their mind." It's hard to blame them. Language is, or rather was, our special thing. We weren't prepared for the arrival of talking machines. Ellie Pavlick, a computer scientist at Brown, has drawn up a taxonomy of our most common responses. There are the "fanboys," who man the hype wires. They believe that large language models are intelligent, maybe even conscious, and prophesy that, before long, they will become superintelligent. The venture capitalist Marc Andreessen has described A.I. as "our alchemy, our Philosopher's Stone--we are literally making sand think." The fanboys' deflationary counterparts are the "curmudgeons," who claim that there's no there, and that only a blockhead would mistake a parlor trick for the soul of the new machine. In the recent book "The AI Con," the linguist Emily Bender and the sociologist Alex Hanna belittle L.L.M.s as "mathy maths," "stochastic parrots," and "a racist pile of linear algebra."
But, Pavlick writes, "there is another way to react." It is O.K., she offers, "to not know." What Pavlick means, on the most basic level, is that large language models are black boxes. We don't really understand how they work. We don't know if it makes sense to call them intelligent, or if it will ever make sense to call them conscious. The existence of talking machines--entities that can do many of the things that only we have ever been able to do--throws a lot of other things into question. We refer to our own minds as if they weren't also black boxes.



Anthropic Is at War With Itself

The Atlantic - Technology

The AI company shouting about AI's dangers can't quite bring itself to slow down. These are not the words you want to hear when it comes to human extinction, but I was hearing them: "Things are moving uncomfortably fast." I was sitting in a conference room with Sam Bowman, a safety researcher at Anthropic. Worth $183 billion at the latest estimate, the AI firm has every incentive to speed things up, ship more products, and develop more advanced chatbots to stay competitive with the likes of OpenAI, Google, and the industry's other giants. But Anthropic is at odds with itself--thinking deeply, even anxiously, about seemingly every decision. Anthropic has positioned itself as the AI industry's superego: the firm that speaks with the most authority about the big questions surrounding the technology, while rival companies develop advertisements and affiliate shopping links (a difference that Anthropic's CEO, Dario Amodei, was eager to call out during an interview in Davos last week).


'Wake up to the risks of AI, they are almost here,' Anthropic boss warns

The Guardian

Dario Amodei questions if human systems are ready to handle the 'almost unimaginable power' that is 'potentially imminent'. Humanity is entering a phase of artificial intelligence development that will "test who we are as a species", the boss of the AI startup Anthropic has said, arguing that the world needs to "wake up" to the risks. Dario Amodei, a co-founder and the chief executive of the company behind the hit chatbot Claude, voiced his fears in a 19,000-word essay titled "The adolescence of technology". Describing the arrival of highly powerful AI systems as potentially imminent, he wrote: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species." Amodei added: "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it." The tech entrepreneur, whose company is reportedly worth $350bn (£255bn), said his essay was an attempt to "jolt people awake" because the world needed to "wake up" to the need for action on AI safety.


Anthropic's Daniela Amodei Believes the Market Will Reward Safe AI

WIRED

The Trump administration might think regulation is killing the AI industry, but Anthropic president Daniela Amodei disagrees. At WIRED's Big Interview event on Thursday, she told WIRED editor at large Steven Levy that even though Trump's AI and crypto czar, David Sacks, may have tweeted that her company is "running a sophisticated regulatory capture strategy based on fear-mongering," she's convinced her company's commitment to calling out the potential dangers of AI is making the industry stronger.


AI firms must be clear on risks or repeat tobacco's mistakes, says Anthropic chief

The Guardian

The Anthropic chief executive, Dario Amodei, has flagged various concerns about its AI models recently. Artificial intelligence will become smarter than 'most or all humans in most or all ways', says Dario Amodei. Mon 17 Nov 2025. Artificial intelligence companies must be transparent about the risks posed by their products or be in danger of repeating the mistakes of tobacco and opioid companies, according to the chief executive of the AI startup Anthropic. Dario Amodei, who runs the US company behind the Claude chatbot, said he believed AI would become smarter than "most or all humans in most or all ways" and urged his peers to "call it as you see it". Speaking to CBS News, Amodei said a lack of transparency about the impact of powerful AI would replay the errors of cigarette and opioid firms that failed to raise a red flag over the potential health damage of their own products.


Inside Anthropic's Big Washington Push

TIME - Tech

Welcome back to In the Loop, a new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? The AI industry has descended upon Washington. The industry recently pledged up to $200 million toward new super PACs aimed at influencing upcoming elections. And on Monday, I attended an event that epitomized this swell of capital and effort: the Anthropic Futures Forum.