Cerebras Systems Announces World's First Brain-Scale Artificial Intelligence Solution

#artificialintelligence

Cerebras Systems, the pioneer in innovative compute solutions for Artificial Intelligence (AI), today unveiled the world's first brain-scale AI solution. The human brain contains on the order of 100 trillion synapses. The largest AI hardware clusters to date have been on the order of 1% of human brain scale, or about 1 trillion synapse equivalents, called parameters. Even at that fraction of full human brain scale, these clusters of graphics processors consume acres of space and megawatts of power, and require dedicated teams to operate. Today, Cerebras announced technology enabling a single CS-2 accelerator, the size of a dorm-room refrigerator, to support models of over 120 trillion parameters.
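The scale comparison above is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, using only the figures quoted in the article (treating parameters as "synapse equivalents" as the text does):

```python
# Quick check of the scale figures quoted in the article above.
human_brain_synapses = 100e12    # ~100 trillion synapses
cluster_params = 1e12            # largest clusters: ~1 trillion parameters
cs2_params = 120e12              # model size a single CS-2 is said to support

cluster_share = cluster_params / human_brain_synapses
cs2_share = cs2_params / human_brain_synapses
print(f"Largest clusters: {cluster_share:.0%} of brain scale")   # 1%
print(f"CS-2 target: {cs2_share:.1f}x brain scale")              # 1.2x
```

The 1% figure in the article falls straight out of the ratio, and a 120-trillion-parameter model would sit slightly above the brain-scale benchmark itself.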


Cerebras Unveils AI Supercomputer-On-A-Chip

#artificialintelligence

Today unicorn startup Cerebras disclosed a few details about the wafer-scale AI chip it has been keeping under wraps for some three years. While many unanswered questions remain, the new approach could mark a significant milestone in the semiconductor industry, where chips have historically been constrained by the size of a single chip's mask. Essentially, Cerebras designed a wafer of 84 interconnected dies that act as one device for compute and memory, linked by a super-fast on-die fabric. While building a supercomputer on a chip sounds like a great idea, building a wafer-scale array of chips is not for the faint of heart or talent. Figure 1: The Cerebras "WSE" is some 50 times larger than the largest NVIDIA chip used in AI training.


Cerebras Systems Unveils the Industry's First Trillion Transistor Chip

#artificialintelligence

Cerebras Systems, a startup dedicated to accelerating Artificial Intelligence (AI) compute, today unveiled the largest chip ever built. Optimized for AI work, the Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors and measures 46,225 square millimeters. The WSE is 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory, and has 10,000 times more memory bandwidth. In AI, chip size is profoundly important.
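The 56.7x figure follows directly from the two die areas quoted in the snippet; a quick check:

```python
# Die areas as quoted in the article (square millimeters).
wse_area = 46_225        # Cerebras Wafer Scale Engine
largest_gpu_area = 815   # largest contemporary GPU die

ratio = wse_area / largest_gpu_area
print(f"WSE is {ratio:.1f}x the area of the largest GPU")  # 56.7x
```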


Cerebras Doubles AI Performance with Second-Gen 7nm Wafer Scale Engine

#artificialintelligence

Nearly two years after its massive 1.2 trillion transistor Wafer Scale Engine chip debuted at Hot Chips, Cerebras Systems is announcing its second-generation technology (WSE-2), which it says packs twice the performance into the same 8″x8″ silicon footprint. "We're going bigger, faster and better in a more power efficient footprint," Cerebras Founder and CTO Andrew Feldman told HPCwire ahead of today's launch. With 2.6 trillion transistors and 850,000 cores, the WSE-2 more than doubles the elements of the first-gen chip (1.2 trillion transistors, 400,000 cores). The new chip, made by TSMC on its 7nm node, delivers 40 GB of on-chip SRAM, 20 petabytes per second of memory bandwidth, and 220 petabits per second of aggregate fabric bandwidth. Gen over gen, the WSE-2 provides about a 2.3X improvement on all major performance metrics, said Feldman.
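The gen-over-gen element counts can be checked the same way. Note that the raw transistor and core ratios land around 2.1-2.2x; Feldman's "about 2.3X" claim refers to performance metrics rather than these element counts.

```python
# Element counts for the two chip generations, as quoted above.
wse1 = {"transistors": 1.2e12, "cores": 400_000}
wse2 = {"transistors": 2.6e12, "cores": 850_000}

for key in wse1:
    ratio = wse2[key] / wse1[key]
    print(f"{key}: {ratio:.2f}x gen over gen")
```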


The five technical challenges Cerebras overcame in building the first trillion-transistor chip – TechCrunch

#artificialintelligence

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today -- and it is a doozy. The "Wafer Scale Engine" is 1.2 trillion transistors (the most ever), 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative). Cerebras' Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems). It's made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry's big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees.