anthropic
SpaceX backs Anthropic with data centre deal amidst Musk's OpenAI lawsuit
Anthropic has reached a deal to tap the computing resources of Elon Musk's SpaceX, marking a detente with its one-time critic and a boost for both companies in the high-stakes artificial intelligence race. Under the agreement announced on Wednesday, Anthropic will use the full computing power of SpaceX's Colossus 1 facility in Memphis, Tennessee, which houses more than 220,000 Nvidia processors and will give the Claude chatbot maker 300 megawatts of new capacity within a month. That is enough electricity to power more than 300,000 homes, as the Dario Amodei-led company seeks to boost the capacity of its Claude Pro and Claude Max AI assistants for subscribers.
- Aerospace & Defense (1.00)
- Law > Litigation (0.62)
- Information Technology > Services (0.62)
- Government > Regional Government > North America Government > United States Government (0.32)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.99)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.69)
Anthropic doubles Claude Code limits, thanks to a deal with SpaceX
Anthropic has partnered with SpaceX to double Claude Code usage limits across Pro, Max, Team, and Enterprise plans, according to PCWorld. The deal gives Anthropic access to SpaceX's Colossus 1 data center, which features more than 220,000 Nvidia GPUs and significantly boosts the company's computing capacity. The partnership marks a surprising shift: Elon Musk previously criticized Anthropic but recently said he was impressed after meetings with company staff. Rather than downgrading its most affordable Claude subscription plan by dropping access to Claude Code, Anthropic has instead doubled Claude Code usage rates for subscribers, starting today. All it took was an eyebrow-raising alliance with an unlikely partner.
- North America > United States > Tennessee (0.15)
- North America > United States > California (0.15)
- Information Technology > Security & Privacy (0.73)
- Leisure & Entertainment > Games > Computer Games (0.54)
- Information Technology > Hardware (0.52)
Anthropic Gets in Bed With SpaceX as the AI Race Turns Weird
In an unexpected turn, the two companies signed a deal for Anthropic to use computing resources from Elon Musk's xAI. Anthropic and Elon Musk's SpaceX said on Wednesday that the two entities have signed an agreement for Anthropic to use computing resources from xAI's data center in Memphis, Tennessee. It's the latest tie-up in an industry scrambling to find enough computers to run complex AI software. SpaceX and xAI were previously separate companies, but the two merged earlier this year. The combined entity, also owned by Musk, is called SpaceXAI.
- Press Release (0.55)
- Research Report (0.49)
- Aerospace & Defense (0.84)
- Information Technology > Services (0.72)
Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows
New research suggests that reliance on AI assistants can have a negative impact on people's ability to think and problem-solve. Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people's ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA. Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem autonomously.
- North America > United States (0.29)
- Europe (0.29)
- Information Technology > Security & Privacy (0.49)
- Government > Military (0.48)
I Am Begging AI Companies to Stop Naming Features After Human Processes
Anthropic announced "dreaming" for AI agents to sort through "memories" at its developer conference. Anthropic just announced a new feature called "dreaming" at the company's developer conference in San Francisco. It's part of Anthropic's recently launched AI agent infrastructure designed to help users manage and deploy tools that automate software processes. This "dreaming" aspect sorts through the transcript of what an agent recently completed and attempts to glean insights to improve the agent's performance. Folks using AI agents often send them on multistep journeys, like visiting a few websites or reading multiple files, to complete online tasks.
- Law (1.00)
- Information Technology > Services (0.48)
Anthropic reportedly agrees to pay Google $200 billion for chips and cloud access
We learned earlier this month that Google and Anthropic had inked a deal that would grant the creator of the Claude AI models access to cloud servers and chips. Today, it was reported that Anthropic has agreed to pay a staggering $200 billion to Google over the next five years. Contracts like this, or Anthropic's other recent multi-billion-dollar arrangement with Amazon, now account for a ludicrous amount of money promised to some of the world's largest tech companies. These cloud service providers have been early investors in the AI boom, gambling that the startups' need for their resources as they grow would yield lucrative dividends. So far, they've been correct.
- Information Technology (1.00)
- Leisure & Entertainment > Games > Computer Games (0.79)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Cloud Computing (0.75)
- Information Technology > Communications > Mobile (0.57)
- Information Technology > Communications > Social Media (0.45)
US to safety test new AI models from Google, Microsoft, xAI
New artificial intelligence (AI) tools and capabilities from Google, Microsoft and xAI will now be tested by the US Department of Commerce before they are released to the public. The tech firms have agreed to voluntarily submit their models for testing through Commerce's Center for AI Standards and Innovation (CAISI). The new pacts expand on agreements reached with AI companies like OpenAI and Anthropic during the Biden Administration, and will see AI models from all of the companies evaluated for their capabilities and security. "These expanded industry collaborations help us scale our work in the public interest at a critical moment," said CAISI director Chris Fall. Overall, the evaluations of the AI tools will cover testing, collaborative research and best-practice development related to commercial AI systems.
- North America > United States (1.00)
- Europe (1.00)
Microsoft, Google, xAI give US access to AI models for security testing
Tech giants Microsoft, Google and xAI say they will allow the United States federal government access to their new artificial intelligence models for national security testing. The Center for AI Standards and Innovation (CAISI) at the Department of Commerce announced the agreement on Tuesday amid increasing concerns about the capabilities that Anthropic's newly unveiled Mythos model could give hackers. The agreement fulfils a pledge the administration of US President Donald Trump made in July to partner with technology companies to vet their AI models for "national security risks". Microsoft will work with US government scientists to test AI systems "in ways that probe unexpected behaviors", the company said in a statement. Together they will develop shared data sets and workflows for testing the company's models, the company said.
- Information Technology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Backlash builds over NHS plan to hide source code from AI hacking risk
NHS England is pulling its open-source software from the internet because of fears around computer-hacking AI models like Mythos. A decision by NHS England to withdraw open-source code created with UK taxpayer funds because of the risk posed by computer-hacking AI models is attracting growing backlash. Last month, Mythos, an AI created by technology firm Anthropic, was widely reported to be capable of discovering flaws in virtually any software, potentially allowing hackers to break into systems running it. NHS England has now told staff that existing and future software must be pulled from public view and kept behind closed doors by 11 May because of this risk. The decision goes against the NHS service standard, which requires that staff make any software they produce open-source so that tools can be built upon, improved and used without the need for duplicated effort.
- Information Technology > Software (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (1.00)
AI chatbot fraud: the 'gift card' subscription that may cost you dear
Some users view AI chatbots as indispensable for helping run their affairs. But it can come at a cost. After subscribing to the Claude chatbot, mystery payments started to appear on one family's credit card bill.
- Leisure & Entertainment > Sports (0.73)
- Retail (0.64)
- Banking & Finance (0.55)