For certain classes of problems in high-performance computing, all supercomputers share an unavoidable and fatal bottleneck: memory bandwidth. That is the argument made this week by one startup at the SC20 supercomputing conference, usually held in San Diego but conducted virtually this year. The company making the argument is Cerebras Systems, the AI computer maker that contends its machine can solve certain problems at speeds no existing system can match. "We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve," Cerebras CEO Andrew Feldman told ZDNet in an interview by Zoom. "This means the CS-1 for this work is the fastest machine ever built, and it's faster than any combination of clustering of other processors," he added.
Cerebras Systems, the Los Altos, California startup that a year ago unveiled the biggest chip ever seen, this afternoon gave a preview of its second-generation AI chip at the Hot Chips conference, which is taking place virtually this year. The second-generation WSE, or "wafer-scale engine," currently "running in our labs," will offer 850,000 individual compute cores on a chip that takes up almost the entire surface of a standard silicon wafer, Cerebras executive Sean Lie told the Hot Chips audience. The processor has 2.6 trillion transistors in total and is manufactured by Taiwan Semiconductor in that company's 7-nanometer fabrication process.
Cerebras Systems, a startup dedicated to accelerating artificial intelligence (AI) compute, today unveiled the largest chip ever built. Optimized for AI work, the Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors and measures 46,225 square millimeters. The WSE is 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. In AI, chip size is profoundly important.
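As a quick sanity check on the figures above, the 56.7x area ratio and the chip's edge length follow directly from the two die areas given. A minimal sketch (the 815 mm² comparison part is the GPU cited in the announcement; the edge-length calculation assumes the WSE die is square):

```python
import math

WSE_AREA_MM2 = 46_225  # Cerebras Wafer Scale Engine die area, per the announcement
GPU_AREA_MM2 = 815     # largest GPU die cited for comparison

# Area ratio: how many times larger the WSE is than the GPU die
ratio = WSE_AREA_MM2 / GPU_AREA_MM2
print(round(ratio, 1))  # -> 56.7, matching the figure in the text

# Assuming a square die, the edge length is the square root of the area
edge_mm = math.sqrt(WSE_AREA_MM2)
print(edge_mm)                       # -> 215.0 (millimeters)
print(round(edge_mm / 25.4, 2))      # -> 8.46 (inches per side)
```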
There are a host of AI-related solutions for the data center, ranging from add-in cards to dedicated servers such as the Nvidia DGX-2. But a startup called Cerebras Systems has its own server offering that relies on a single massive processor rather than a slew of small ones working in parallel. Cerebras has taken the wraps off its Wafer Scale Engine (WSE), an AI chip that measures 8.46 by 8.46 inches; a typical CPU or GPU is about the size of a postage stamp. Cerebras won't sell the chips to ODMs because of the challenges of building and cooling such a massive chip.