While some of the largest technology companies in the world are racing to design the next generation of machine learning-focused chips -- for data centers and edge devices alike -- a whole class of startups is racing to get there first. Among them is Cerebras Systems, one of the better-funded entrants, which is continuing its push into next-generation machine learning operations with the hiring of Dhiraj Mallick as its Vice President of Engineering and Business Development. Prior to joining Cerebras, Mallick served as VP of architecture and CTO of Intel's data center group -- a group that generated more than $5.5 billion in the second quarter of this year, up from nearly $4.4 billion in the second quarter of 2017, and more than $10 billion in revenue in the first half of this year. Before Intel, Mallick spent time at AMD and SeaMicro.
Cerebras Systems, a startup dedicated to accelerating artificial intelligence (AI) compute, today unveiled the largest chip ever built. Optimized for AI work, the Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors and measures 46,225 square millimeters. The WSE is 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. In AI, chip size is profoundly important.
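The "56.7 times larger" figure follows directly from the two die areas quoted above; a minimal sketch of the arithmetic, using only the numbers given in the announcement:

```python
# Check the area ratio quoted in the announcement.
wse_area_mm2 = 46_225   # Cerebras WSE die area (square millimeters)
gpu_area_mm2 = 815      # largest GPU die cited for comparison

ratio = wse_area_mm2 / gpu_area_mm2
print(f"WSE is {ratio:.1f}x the area of the largest GPU")
```

Dividing 46,225 by 815 gives roughly 56.7, matching the claim in the text.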
Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today -- and it is a doozy. The "Wafer Scale Engine" packs 1.2 trillion transistors (the most ever) into 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative). [Image: Cerebras' Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems).] The chip made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry's big confabs for product introductions and roadmaps, drawing various levels of oohs and aahs from attendees.
There are a host of different AI-focused solutions for the data center, ranging from add-in cards to dedicated servers like the Nvidia DGX-2. But a startup called Cerebras Systems has its own server offering that relies on a single massive processor rather than a slew of small ones working in parallel. Cerebras has taken the wraps off its Wafer Scale Engine (WSE), an AI chip that measures 8.46 x 8.46 inches; a typical CPU or GPU, by contrast, is about the size of a postage stamp. Cerebras won't sell the chips to ODMs due to the challenges of building and cooling such a massive part.
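The 8.46-inch dimension is consistent with the 46,225 square millimeter area quoted in the other reports; a quick sketch of the unit conversion, assuming a square die:

```python
# Side length implied by the quoted 46,225 mm^2 die area.
import math

area_mm2 = 46_225
side_mm = math.sqrt(area_mm2)   # 215 mm per side for a square die
side_in = side_mm / 25.4        # millimeters to inches
print(f"{side_mm:.0f} mm per side = {side_in:.2f} in")
```

The square root of 46,225 is exactly 215 mm, which converts to about 8.46 inches per side, matching the figure above.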