Cerebras prepares for the era of 120 trillion-parameter neural networks

ZDNet

Cerebras has added to its previously announced CS-2 AI computer a new switch product, the SwarmX, which performs calculations as well as routing, and a memory computer called MemoryX, containing 2.4 petabytes of DRAM and NAND. Artificial intelligence in its deep learning form is producing neural networks that will have trillions of neural weights, or parameters, and this increasing scale presents special problems for the hardware and software used to develop such networks. "In two years, models got a thousand times bigger and they required a thousand times more compute," says Andrew Feldman, co-founder and CEO of AI system maker Cerebras Systems, summing up the recent history of neural nets in an interview with ZDNet via Zoom. "That is a tough trajectory," says Feldman. Feldman's company this week is unveiling new computers at Hot Chips, the annual conference on advanced computer chips.


Cerebras Systems Lays The Foundation For Huge Artificial Intelligence

#artificialintelligence

OK, I thought I was done preparing for Hot Chips, having written six blogs and three research papers. I was ready to take a few days off. Nonetheless, I decided to take a call from Andrew Feldman, CEO of Cerebras Systems. I have known and respected Andrew for over a decade, and he always has exciting things to share. I'm so glad I took the call.


Cerebras Systems Unveils the Industry's First Trillion Transistor Chip

#artificialintelligence

Cerebras Systems, a startup dedicated to accelerating artificial intelligence (AI) compute, today unveiled the largest chip ever built. Optimized for AI work, the Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors and measures 46,225 square millimeters. The WSE is 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. In AI, chip size is profoundly important.


AI chip startup Cerebras nabs $250 million Series F round at over $4 billion valuation

ZDNet

Cerebras Systems, the five-year-old AI chip startup that has created the world's largest computer chip, on Wednesday announced it has received a $250 million Series F round led by venture capital firms Edge Capital, via its Alpha Wave Ventures, and Abu Dhabi Growth Fund. Returning investors participating in the round include Altimeter Capital, Benchmark Capital, Coatue Management, Eclipse Ventures, Moore Strategic Ventures, and VY Capital. The new money brings Cerebras's total raised to $750 million, and the company says it has a post-money valuation of over $4 billion. Said co-founder and CEO Andrew Feldman in prepared remarks, "The Cerebras team and our extraordinary customers have achieved incredible technological breakthroughs that are transforming AI, making possible what was previously unimaginable." See also: Cerebras prepares for the era of 120 trillion-parameter neural networks.


AI hardware pioneer Cerebras expands access in partnership with cloud vendor Cirrascale

ZDNet

The battle for artificial intelligence hardware keeps moving through phases. Three years ago, chip startups such as Habana Labs, Graphcore, and Cerebras Systems grabbed the spotlight with special semiconductors designed expressly for deep learning. Those vendors then moved on to selling whole systems, and newcomers such as SambaNova Systems started out with that premise. Now the action is moving to a new phase, in which vendors partner with cloud operators to challenge Nvidia's entrenched position as the vendor of choice in cloud AI. Cerebras on Thursday announced a partnership with cloud operator Cirrascale to let users rent capacity on Cerebras's CS-2 AI machine running in Cirrascale cloud data centers.