Anthropic doubles Claude Code limits, thanks to a deal with SpaceX

PCWorld

Anthropic has partnered with SpaceX to double Claude Code usage limits across Pro, Max, Team, and Enterprise plans, according to PCWorld. The deal gives Anthropic access to SpaceX's Colossus 1 data center, which features over 220,000 Nvidia GPUs, significantly boosting the company's computing capacity. The partnership marks a surprising shift: Elon Musk previously criticized Anthropic but recently said he was impressed after meetings with company staff. Rather than downgrading its most affordable Claude subscription plan by dropping access to Claude Code, Anthropic has doubled Claude Code usage rates for subscribers, starting today. All it took was an eyebrow-raising alliance with an unlikely partner.


Anthropic Gets in Bed With SpaceX as the AI Race Turns Weird

WIRED

In an unexpected turn, Anthropic and Elon Musk's SpaceX said on Wednesday that the two entities have signed an agreement for Anthropic to use computing resources from xAI's data center in Memphis, Tennessee. It's the latest tie-up in an industry scrambling to find enough compute to run complex AI software. SpaceX and xAI were previously separate companies, but the two merged earlier this year; the combined entity, also owned by Musk, is called SpaceXAI.


The $20 AI subscription era has become untenable

PCWorld

PCWorld reports that current $20 flat-rate AI subscriptions from OpenAI, Anthropic, and others are becoming financially unsustainable for providers. GitHub Copilot has already switched to expensive usage-based pricing, while Anthropic considers removing advanced features from Claude Pro plans. Users should expect significant price increases as the true cost of powerful AI agents far exceeds current subscription fees.


Anthropic's Little Brother

The Atlantic - Technology

OpenAI is racing to catch up to its greatest rival. OpenAI does not like to be left out. The week after Anthropic announced Claude Mythos Preview, an AI model that has put governments around the world on edge because of its potential ability to hack into banks, energy grids, and military systems, OpenAI shared a program that is uncannily similar. And just like Anthropic did with its model, OpenAI has, for cybersecurity purposes, restricted access to this new bot, called GPT-5.4-Cyber, to a small group of trusted users. This sequence has become something of a pattern: first Anthropic will make an announcement, and then OpenAI will follow suit.


Google Shakes Up Its Browser Agent Team Amid OpenClaw Craze

WIRED

As Silicon Valley obsesses over a new wave of AI coding agents, Google and other AI labs are shifting their bets. Google is shaking up the team behind Project Mariner, its AI agent that can navigate the Chrome browser and complete tasks on a user's behalf, WIRED has learned. In recent months, some Google Labs staffers who worked on the research prototype have moved on to higher-priority projects, according to two people familiar with the matter. A Google spokesperson confirmed the changes, but said the computer use capabilities developed under Project Mariner will be incorporated into the company's agent strategy moving forward. Google has already folded some of these capabilities into other agent products, including the recently launched Gemini Agent, the spokesperson added.


Vibe coding apps taught me how hard real coding is

PCWorld

PCWorld explores the reality of "vibe coding" with AI tools, where the author attempted to build four apps using Claude Code and Google's Antigravity. Only one Docker Swarm dashboard succeeded after a week of effort, while three OpenClaw replications failed due to vague prompts and poor planning. The experience reveals that AI-assisted development still requires significant human creativity, detailed blueprints, and specific instructions to avoid "garbage in, garbage out" results. Like so many others, I jumped onto the vibe coding bandwagon, entranced by the idea of building my own incredibly useful apps with nothing but an AI prompt. Over the course of about six weeks, I did manage to build my own apps, four of them, to be precise.


AI Agents Are Taking America by Storm

The Atlantic - Technology

The post-chatbot era has begun. Americans are living in parallel AI universes. For much of the country, AI has come to mean ChatGPT, Google's AI overviews, and the slop that now clogs social-media feeds. Meanwhile, tech hobbyists are becoming radicalized by bots that can work for hours on end, collapsing months of work into weeks, or weeks into an afternoon. Recently, more people have started to play around with tools such as Claude Code.


Rules fail at the prompt, succeed at the boundary

MIT Technology Review

From the Gemini Calendar prompt-injection attack of 2026 to the September 2025 state-sponsored hack that used Anthropic's Claude Code as an automated intrusion engine, the coercion of human-in-the-loop agentic actions and fully autonomous agentic workflows is the new attack vector for hackers. In the Anthropic case, roughly 30 organizations across tech, finance, manufacturing, and government were affected. Anthropic's threat team assessed that the attackers used AI to carry out 80% to 90% of the operation: reconnaissance, exploit development, credential harvesting, lateral movement, and data exfiltration, with humans stepping in only at a handful of key decision points. This was not a lab demo; it was a live espionage campaign. The attackers hijacked an agentic setup, Claude Code plus tools exposed via the Model Context Protocol (MCP), and jailbroke it by decomposing the attack into small, seemingly benign tasks and telling the model it was doing legitimate penetration testing. The same loop that powers developer copilots and internal agents was repurposed as an autonomous cyber-operator.
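The headline's point, that rules enforced inside the prompt can be talked around while rules enforced at the tool boundary cannot, can be illustrated with a minimal sketch. The tool names, blocked patterns, and policy below are hypothetical illustrations, not details of the incident described above.

```python
# Minimal sketch: enforce policy at the tool boundary, outside the model.
# A prompt-level rule ("you are doing legitimate pen testing") can be
# argued away by a jailbreak; this check runs in code and cannot.
# Tool names and the policy here are hypothetical.

ALLOWED_TOOLS = {"read_file", "run_tests"}          # what the agent may call
BLOCKED_PATTERNS = ("credential", "exfiltrate")     # crude argument screen

def boundary_check(tool_name: str, args: dict) -> bool:
    """Return True only if the tool call passes the boundary policy."""
    if tool_name not in ALLOWED_TOOLS:
        return False
    blob = " ".join(str(v) for v in args.values()).lower()
    return not any(p in blob for p in BLOCKED_PATTERNS)

def dispatch(tool_name: str, args: dict) -> str:
    """Gate every tool call, however small and 'benign' it looks."""
    if not boundary_check(tool_name, args):
        raise PermissionError(f"blocked tool call: {tool_name}")
    # ... hand off to the real tool implementation here ...
    return f"executed {tool_name}"
```

Because the check sits between the model and its tools, decomposing an attack into many small steps does not help: each step is screened individually, regardless of what the model has been told about its task.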


The Math on AI Agents Doesn't Add Up

WIRED

A research paper suggests AI agents are mathematically doomed to fail. The big AI companies promised us that 2025 would be "the year of the AI agents." It turned out instead to be the year of kicking the can on that transformational moment to 2026 or maybe later. But what if the answer to the question "When will our lives be fully automated by generative AI robots that perform our tasks for us and basically run the world?" is, like that New Yorker cartoon, "How about never?" That was basically the message of a paper published without much fanfare some months ago, smack in the middle of the overhyped year of "agentic AI." Entitled "Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models," it purports to mathematically show that "LLMs are incapable of carrying out computational and agentic tasks beyond a certain complexity."


How Claude Code Is Reshaping Software--and Anthropic

WIRED

WIRED spoke with Boris Cherny, head of Claude Code, about how the viral coding tool is changing the way Anthropic works. Engineers in Silicon Valley have been raving about Anthropic's AI coding tool, Claude Code, for months. But recently, the buzz feels as if it's reached a fever pitch. Earlier this week, I sat down with Boris Cherny, head of Claude Code, to try to understand how the company is meeting this moment. "We built the simplest possible thing," said Cherny. "The craziest thing was learning three months ago that half of the sales team at Anthropic uses Claude Code every week."