Semiconductor startup Cerebras Systems launches massive AI chip

#artificialintelligence

There are a host of different AI-related solutions for the data center, ranging from add-in cards to dedicated servers, like the Nvidia DGX-2. But a startup called Cerebras Systems has its own server offering that relies on a single massive processor rather than a slew of small ones working in parallel. Cerebras has taken the wraps off its Wafer Scale Engine (WSE), an AI chip that measures 8.46 by 8.46 inches; by comparison, a typical CPU or GPU is about the size of a postage stamp. Cerebras won't sell the chips to ODMs due to the challenges of building and cooling such a massive chip.


Glaxo's biology research with novel Cerebras machine shows hardware may change how AI is done

ZDNet

Artificial intelligence research has experienced a renaissance over the last twenty years, in part as a result of greater computing power, particularly the rise of the graphics processing unit, or GPU. Now, novel AI computer systems may be poised to have a similarly large impact. They may change not just the speed of AI work but the kinds of experiments that are done in the field. AI is changing the entire nature of computing, and as part of an inevitable feedback loop, computing will end up changing the nature of AI. An example of that showed up this week in new work being done by GlaxoSmithKline, the big British drug maker.


Cerebras Debuts AI Supercomputer-on-a-Wafer – HPCwire – IAM Network

#artificialintelligence

Could a wafer-scale silicon chip from Cerebras Systems be the first "supercomputer on a chip" worthy of the designation? Last week at Hot Chips, held at Stanford University…


'We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve,' startup Cerebras tells supercomputing conference

ZDNet

For certain classes of problems in high-performance computing, all supercomputers face an unavoidable, and fatal, bottleneck: memory bandwidth. That is the argument made this week by one startup at the SC20 supercomputing conference, which usually takes place in San Diego but is being held virtually this week. The company making that argument is Cerebras Systems, the AI computer maker that contends its machine can solve problems at speeds no existing system can match. "We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve," Cerebras's CEO, Andrew Feldman, told ZDNet in an interview by Zoom. "This means the CS-1 for this work is the fastest machine ever built, and it's faster than any combination of clustering of other processors," he added.


Cerebras teases second-generation wafer-scale AI chip

ZDNet

Cerebras executive Sean Lie described the company's forthcoming second-generation AI chip at the Hot Chips conference, which is taking place virtually this year. Cerebras Systems, the Los Altos, California startup that a year ago unveiled the biggest chip ever seen, this afternoon gave a preview of its second-generation part. The second-gen WSE, or "wafer-scale engine," currently "running in our labs," will offer 850,000 individual compute cores on a chip that takes up almost the entire surface of a standard silicon wafer, according to Lie. The processor has 2.6 trillion transistors in total and is manufactured by Taiwan Semiconductor in that company's 7-nanometer fabrication process.