Cerebras Debuts AI Supercomputer-on-a-Wafer – HPCwire – IAM Network

#artificialintelligence

Could a wafer-scale silicon chip from Cerebras Systems be the first "supercomputer on a chip" worthy of the designation? Last week at Hot Chips, held at Stanford University, Cerebras unveiled its Wafer Scale Engine.


Cerebras Systems Unveils the Industry's First Trillion Transistor Chip

#artificialintelligence

Cerebras Systems, a startup dedicated to accelerating artificial intelligence (AI) compute, today unveiled the largest chip ever built. Optimized for AI work, the Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors and measures 46,225 square millimeters. The WSE is 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. In AI, chip size is profoundly important.
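The size comparison quoted above can be checked with simple arithmetic; the sketch below uses only the figures stated in the article (the comparison GPU is not named in this excerpt):

```python
# Sanity-check of the WSE-vs-GPU comparison figures quoted in the article.
wse_area_mm2 = 46_225        # Wafer Scale Engine die area
wse_transistors = 1.2e12     # "more than 1.2 trillion transistors"

gpu_area_mm2 = 815           # largest GPU cited in the article
gpu_transistors = 21.1e9     # 21.1 billion transistors

area_ratio = wse_area_mm2 / gpu_area_mm2
transistor_ratio = wse_transistors / gpu_transistors

print(f"Area ratio: {area_ratio:.1f}x")              # ~56.7x, matching the claim
print(f"Transistor ratio: {transistor_ratio:.1f}x")  # ~56.9x
```

The transistor count scales almost exactly with die area here, which is expected when both chips are fabricated at comparable process densities.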


AI Chip startup Cerebras Systems picks up a former Intel top exec

#artificialintelligence

While some of the largest technology companies in the world are racing to figure out the next generation of machine learning-focused chips -- for data centers and edge devices alike -- a whole class of startups is racing to get there first. That includes Cerebras Systems, one of the best-funded of those startups, which is continuing to target next-generation machine learning workloads with the hiring of Dhiraj Mallick as its Vice President of Engineering and Business Development. Prior to joining Cerebras, Mallick served as the VP of architecture and CTO of Intel's data center group. That group generated more than $5.5 billion in the second quarter of this year, up from nearly $4.4 billion in the second quarter of 2017, and has generated more than $10 billion in revenue in the first half of this year. Before Intel, Mallick spent time at AMD and SeaMicro.


Argonne National Laboratory Deploys Cerebras CS-1, the World's Fastest Artificial Intelligence Computer

#artificialintelligence

LOS ALTOS, CALIFORNIA and LEMONT, ILLINOIS – Cerebras Systems, a company dedicated to accelerating artificial intelligence (AI) compute, and Argonne National Laboratory, a multidisciplinary science and engineering research center, today announced that Argonne is the first national laboratory to deploy the Cerebras CS-1 system. Unveiled today at SC19, the CS-1 is the fastest AI computer system in existence and integrates the pioneering Wafer Scale Engine, the largest and fastest AI processor ever built. By removing compute as the bottleneck in AI, the CS-1 enables AI practitioners to answer more questions and explore more ideas in less time. The CS-1 delivers record-breaking performance and scale to AI compute, and its deployment across national laboratories enables the largest supercomputer sites in the world to achieve 100- to 1,000-fold improvement over existing AI accelerators. By pairing supercomputing power with the CS-1's AI processing capabilities, Argonne can now accelerate research and development of deep learning models to solve science problems not achievable with existing systems.


The five technical challenges Cerebras overcame in building the first trillion-transistor chip – TechCrunch

#artificialintelligence

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today -- and it is a doozy. The "Wafer Scale Engine" comes in at 1.2 trillion transistors (the most ever) and 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative). For scale, Cerebras' Wafer Scale Engine is larger than a typical Mac keyboard (image via Cerebras Systems). It has made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry's big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees.