What Is Claude? Anthropic Doesn't Know, Either
Researchers at the company are trying to understand their A.I. system's mind--examining its neurons, running it through psychology experiments, and putting it on the therapy couch. It has become increasingly clear that Claude's selfhood, much like our own, is a matter of both neurons and narratives.

A large language model is nothing more than a monumental pile of small numbers. It converts words into numbers, runs those numbers through a numerical pinball game, and turns the resulting numbers back into words. Similar piles are part of the furniture of everyday life. Meteorologists use them to predict the weather. Epidemiologists use them to predict the paths of diseases. Among regular people, they do not usually inspire intense feelings. But when these A.I. systems began to predict the path of a sentence--that is, to talk--the reaction was widespread delirium. As a cognitive scientist wrote recently, "For hurricanes or pandemics, this is as rigorous as science gets; for sequences of words, everyone seems to lose their mind."

It's hard to blame them. Language is, or rather was, our special thing. We weren't prepared for the arrival of talking machines. Ellie Pavlick, a computer scientist at Brown, has drawn up a taxonomy of our most common responses. There are the "fanboys," who man the hype wires. They believe that large language models are intelligent, maybe even conscious, and prophesy that, before long, they will become superintelligent. The venture capitalist Marc Andreessen has described A.I. as "our alchemy, our Philosopher's Stone--we are literally making sand think." The fanboys' deflationary counterparts are the "curmudgeons," who claim that there's no there there, and that only a blockhead would mistake a parlor trick for the soul of the new machine. In the recent book "The AI Con," the linguist Emily Bender and the sociologist Alex Hanna belittle L.L.M.s as "mathy maths," "stochastic parrots," and "a racist pile of linear algebra."
But, Pavlick writes, "there is another way to react." It is O.K., she offers, "to not know." What Pavlick means, on the most basic level, is that large language models are black boxes. We don't really understand how they work. We don't know if it makes sense to call them intelligent, or if it will ever make sense to call them conscious. The existence of talking machines--entities that can do many of the things that only we have ever been able to do--throws a lot of other things into question. We refer to our own minds as if they weren't also black boxes.
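The "pile of small numbers" description can be made concrete with a toy sketch. Everything here is invented for illustration--a five-word vocabulary and a hand-written score table standing in for the billions of learned weights a real model would have--but the loop is the same shape: words in, arithmetic in the middle, a word back out.

```python
# Toy illustration of a language model as a pile of numbers.
# All names and numbers are made up; a real model learns its weights.
vocab = ["the", "cat", "sat", "on", "mat"]
word_to_id = {w: i for i, w in enumerate(vocab)}

# weights[i][j] scores how likely word j is to follow word i.
weights = [
    [0.0, 0.6, 0.1, 0.1, 0.2],  # after "the"
    [0.1, 0.0, 0.7, 0.1, 0.1],  # after "cat"
    [0.1, 0.1, 0.0, 0.7, 0.1],  # after "sat"
    [0.8, 0.0, 0.0, 0.0, 0.2],  # after "on"
    [0.5, 0.2, 0.1, 0.1, 0.1],  # after "mat"
]

def next_word(word: str) -> str:
    """Word -> number -> arithmetic -> number -> word."""
    scores = weights[word_to_id[word]]                       # word becomes numbers
    best = max(range(len(scores)), key=scores.__getitem__)   # pick the top score
    return vocab[best]                                       # number becomes a word

sentence = ["the"]
for _ in range(4):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # prints "the cat sat on the"
```

The gap between this sketch and Claude is scale, not kind: replace the five-row table with billions of learned parameters and the argmax with a sampled probability, and you have the "numerical pinball game" the article describes.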
Anthropic Is at War With Itself
The AI company shouting about AI's dangers can't quite bring itself to slow down.

These are not the words you want to hear when it comes to human extinction, but I was hearing them: "Things are moving uncomfortably fast." I was sitting in a conference room with Sam Bowman, a safety researcher at Anthropic. Worth $183 billion at the latest estimate, the AI firm has every incentive to speed things up, ship more products, and develop more advanced chatbots to stay competitive with the likes of OpenAI, Google, and the industry's other giants. But Anthropic is at odds with itself--thinking deeply, even anxiously, about seemingly every decision. Anthropic has positioned itself as the AI industry's superego: the firm that speaks with the most authority about the big questions surrounding the technology, while rival companies develop advertisements and affiliate shopping links (a difference that Anthropic's CEO, Dario Amodei, was eager to call out during an interview in Davos last week).
'Wake up to the risks of AI, they are almost here,' Anthropic boss warns
Dario Amodei questions if human systems are ready to handle the 'almost unimaginable power' that is 'potentially imminent'

Humanity is entering a phase of artificial intelligence development that will "test who we are as a species", the boss of the AI startup Anthropic has said, arguing that the world needs to "wake up" to the risks. Dario Amodei, a co-founder and the chief executive of the company behind the hit chatbot Claude, voiced his fears in a 19,000-word essay titled "The adolescence of technology". Describing the arrival of highly powerful AI systems as potentially imminent, he wrote: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species." Amodei added: "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it." The tech entrepreneur, whose company is reportedly worth $350bn (£255bn), said his essay was an attempt to "jolt people awake" because the world needed to "wake up" to the need for action on AI safety.
Anthropic's Daniela Amodei Believes the Market Will Reward Safe AI
The Trump administration might think regulation is killing the AI industry, but Anthropic president Daniela Amodei disagrees.

At WIRED's Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei told WIRED editor at large Steven Levy that even though Trump's AI and crypto czar, David Sacks, may have tweeted that her company is "running a sophisticated regulatory capture strategy based on fear-mongering," she's convinced her company's commitment to calling out the potential dangers of AI is making the industry stronger. WIRED's Big Interview series returned to San Francisco with a series of in-depth live conversations.
AI firms must be clear on risks or repeat tobacco's mistakes, says Anthropic chief
The Anthropic chief executive, Dario Amodei, has flagged various concerns about its AI models recently.

Artificial intelligence will become smarter than 'most or all humans in most or all ways', says Dario Amodei

Mon 17 Nov 2025 06.35 EST (last modified 07.25 EST)

Artificial intelligence companies must be transparent about the risks posed by their products or be in danger of repeating the mistakes of tobacco and opioid companies, according to the chief executive of the AI startup Anthropic. Dario Amodei, who runs the US company behind the Claude chatbot, said he believed AI would become smarter than "most or all humans in most or all ways" and urged his peers to "call it as you see it". Speaking to CBS News, Amodei said a lack of transparency about the impact of powerful AI would replay the errors of cigarette and opioid firms that failed to raise a red flag over the potential health damage of their own products.
Inside Anthropic's Big Washington Push
Welcome back to In the Loop, a new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? The AI industry has descended upon Washington. The industry recently pledged up to $200 million toward new super PACs aimed at influencing upcoming elections. And on Monday, I attended an event that epitomized this swell of capital and effort: The Anthropic Futures Forum.
Revealed: The richest and youngest AI-billionaires making fortune from the big tech boom
From helping you answer emails to translating legal documents, artificial intelligence is now a part of almost all facets of life. Meanwhile, organisations from Microsoft and Apple to the NHS have piled vast sums of funding into the latest intelligent software. And for the few people behind this AI boom, there have been enormous profits to be made. Leading the pack as the richest of the new AI billionaires is Jensen Huang, CEO of chipmaker Nvidia, with a staggering net worth of £113 billion ($151bn). Mr Huang joins several monumental big tech figures, such as Meta's Mark Zuckerberg and Elon Musk, who have recently made huge investments in AI.
Leaked Memo: Anthropic CEO Says the Company Will Pursue Gulf State Investments After All
Anthropic is planning to seek investment from the United Arab Emirates and Qatar, according to a Slack message CEO Dario Amodei sent to staff Sunday morning, which WIRED obtained. Weighing the pros and cons, Amodei acknowledged in his note that accepting money from Middle East leaders would likely enrich "dictators." "This is a real downside and I'm not thrilled about it," he wrote. "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on." The message comes as AI companies race to secure the massive amounts of capital required to train and develop frontier AI models.
US attacks on science and research a 'great gift' to China on artificial intelligence, former OpenAI board member says
The US administration's targeting of academic research and international students is a "great gift" to China in the race to compete on artificial intelligence, former OpenAI board member Helen Toner has said. The director of strategy at Georgetown's Center for Security and Emerging Technology (CSET) joined the board of OpenAI in 2021 after a career studying AI and the relationship between the United States and China. Toner, a 33-year-old University of Melbourne graduate, was on the board for two years until a falling-out with founder Sam Altman in 2023. Altman was fired by the board over claims that he was not "consistently candid" in his communications and the board did not have confidence in his ability to lead. The chaotic months that followed saw Altman re-hired, with three members of the board, including Toner, ousted instead.
Will AI wipe out the first rung of the career ladder?
This week, I'm wondering what my first jobs in journalism would have been like had generative AI been around. In other news: Elon Musk leaves a trail of chaos, and influencers are selling the text they fed to AI to make art. Generative artificial intelligence may eliminate the job you got with your diploma still in hand, say executives who offered grim assessments of the entry-level job market last week in multiple forums. Dario Amodei, CEO of Anthropic, which makes the multifunctional AI model Claude, told Axios last week that he believes AI could cut half of all entry-level white-collar jobs and send overall unemployment rocketing to 20% within the next five years. One explanation for why an AI company CEO might make such a dire prediction is that it hypes the capabilities of his product.