Cerebras Systems, the pioneer in innovative compute solutions for Artificial Intelligence (AI), today unveiled the world's first brain-scale AI solution. The human brain contains on the order of 100 trillion synapses. The largest AI hardware clusters have been on the order of 1% of human brain scale, or about 1 trillion synapse equivalents, called parameters. At only a fraction of full human brain scale, these clusters of graphics processors consume acres of space and megawatts of power, and require dedicated teams to operate. Today, Cerebras announces technology enabling a single CS-2 accelerator--the size of a dorm room refrigerator--to support models of over 120 trillion parameters in size.
Cerebras Systems, the five-year-old AI chip startup that has created the world's largest computer chip, on Wednesday announced it has received a Series F round of $250 million led by venture capital firms Edge Capital, via its Alpha Wave Ventures, and Abu Dhabi Growth Fund. Returning investors participating in the round include Altimeter Capital, Benchmark Capital, Coatue Management, Eclipse Ventures, Moore Strategic Ventures, and VY Capital. The new money brings Cerebras's total raised to $750 million, and the company says it has a post-money valuation of over $4 billion. Said co-founder and CEO Andrew Feldman in prepared remarks, "The Cerebras team and our extraordinary customers have achieved incredible technological breakthroughs that are transforming AI, making possible what was previously unimaginable."
OK, I thought I was done preparing for HotChips, having written six blogs and three research papers, and I was ready to take a few days off. Nonetheless, I decided to take a call from Andrew Feldman, CEO of Cerebras Systems. I have known and respected Andrew for over a decade, and he always has exciting things to share. I'm so glad I took the call.
The battle for artificial intelligence hardware keeps moving through phases. Three years ago, chip startups such as Habana Labs, Graphcore, and Cerebras Systems grabbed the spotlight with special semiconductors designed expressly for deep learning. Those vendors then moved on to selling whole systems, with newcomers such as SambaNova Systems starting out with that premise. Now the battle is entering a new phase, in which vendors are partnering with cloud operators to challenge Nvidia's entrenched place as the vendor of choice in cloud AI. Cerebras on Thursday announced a partnership with cloud operator Cirrascale to allow users to rent capacity on Cerebras's CS-2 AI machine running in Cirrascale cloud data centers.
For certain classes of problems in high-performance computing, all supercomputers have an unavoidable and fatal bottleneck: memory bandwidth. That is the argument made this week by one startup company at the SC20 supercomputing conference, which usually happens in San Diego but is happening this week virtually. The company making that argument is Cerebras Systems, the AI computer maker that contends its machine can solve certain problems at speeds no existing system can match. "We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve," Cerebras's CEO, Andrew Feldman, told ZDNet in an interview by Zoom. "This means the CS-1 for this work is the fastest machine ever built, and it's faster than any combination of clustering of other processors," he added.