
Cerebras Doubles AI Performance with Second-Gen 7nm Wafer Scale Engine


Nearly two years since its massive 1.2 trillion transistor Wafer Scale Engine chip debuted at Hot Chips, Cerebras Systems is announcing its second-generation technology (WSE-2), which it says packs twice the performance into the same 8″x8″ silicon footprint. "We're going bigger, faster and better in a more power efficient footprint," Cerebras co-founder and CEO Andrew Feldman told HPCwire ahead of today's launch. With 2.6 trillion transistors and 850,000 cores, the WSE-2 more than doubles the elements of the first-gen chip (1.2 trillion transistors, 400,000 cores). The new chip, made by TSMC on its 7nm node, delivers 40 GB of on-chip SRAM, 20 petabytes per second of memory bandwidth and 220 petabits per second of aggregate fabric bandwidth. Gen over gen, the WSE-2 provides about a 2.3X improvement on all major performance metrics, said Feldman.
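The gen-over-gen multiple can be sanity-checked against the two figures quoted above; a minimal sketch, using only the values from the excerpt:

```python
# WSE-1 vs. WSE-2 figures as quoted in the article.
wse1 = {"transistors": 1.2e12, "cores": 400_000}
wse2 = {"transistors": 2.6e12, "cores": 850_000}

# Both ratios land a bit above 2x, broadly consistent with the
# "about 2.3X on all major performance metrics" claim.
for metric in wse1:
    print(f"{metric}: {wse2[metric] / wse1[metric]:.2f}x")
```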

'We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve,' startup Cerebras tells supercomputing conference


For certain classes of problems in high-performance computing, all supercomputers have an unavoidable and fatal bottleneck: memory bandwidth. That is the argument made this week by one startup company at the SC20 supercomputing conference, which is being held virtually this year. The company making that argument is Cerebras Systems, the AI computer maker that contends its machine can achieve speeds in solving problems that no existing system can match. "We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve," Cerebras's CEO, Andrew Feldman, told ZDNet in an interview by Zoom. "This means the CS-1 for this work is the fastest machine ever built, and it's faster than any combination of clustering of other processors," he added.

AI chip startup Cerebras nabs $250 million Series F round at over $4 billion valuation


Cerebras Systems, the five-year-old AI chip startup that has created the world's largest computer chip, on Wednesday announced it has received a $250 million Series F round led by venture capital firm Falcon Edge Capital, via its Alpha Wave Ventures fund, and Abu Dhabi Growth Fund. Returning investors participating in the round include Altimeter Capital, Benchmark Capital, Coatue Management, Eclipse Ventures, Moore Strategic Ventures, and VY Capital. The new money brings Cerebras's total raised to $750 million, and the company says it has a post-money valuation of over $4 billion. Said co-founder and CEO Andrew Feldman in prepared remarks: "The Cerebras team and our extraordinary customers have achieved incredible technological breakthroughs that are transforming AI, making possible what was previously unimaginable." See also: Cerebras prepares for the era of 120 trillion-parameter neural networks.

This Huge Computer Chip Could Lead to Big A.I. Advances


Tucked in the Los Altos hills near the Stanford University campus, in a low-slung bunker of offices across from a coffee shop, is a lab overflowing with blinking machines putting circuits through their paces to test for speed, the silicon equivalent of a tool and die shop. Most chips you can balance on the tip of your finger, measuring just a centimeter on a side. Something very different is emerging here. Andrew Feldman, 50, chief executive of startup Cerebras Systems, holds up both hands, bracing between them a shining slab the size of a large mouse pad, an exquisite array of interconnecting lines etched in silicon that shines a deep amber under the dull fluorescent lights. At eight and a half inches on each side, it is the biggest computer chip the world has ever seen.

Cerebras Systems Unveils the Industry's First Trillion Transistor Chip


Cerebras Systems, a startup dedicated to accelerating artificial intelligence (AI) compute, today unveiled the largest chip ever built. Optimized for AI work, the Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors and measures 46,225 square millimeters. The WSE is 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. In AI, chip size is profoundly important.
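The size comparison is straightforward to verify from the two die areas quoted in the release; a quick check, using only those figures:

```python
# Die areas in square millimeters, as quoted in the announcement.
wse_area_mm2 = 46_225
largest_gpu_area_mm2 = 815

ratio = wse_area_mm2 / largest_gpu_area_mm2
print(f"The WSE is {ratio:.1f}x larger")  # matches the stated 56.7x
```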