Cerebras CEO talks about the big implications for machine learning in company's big chip

ZDNet

#artificialintelligence

You may have heard that, on Monday, Silicon Valley startup Cerebras Systems unveiled the world's biggest chip, called the WSE, or "wafer-scale engine," pronounced "wise." It is going to be built into complete computing systems sold by Cerebras. What you may not know is that the WSE and the systems it makes possible have some fascinating implications for deep learning forms of AI, beyond merely speeding up computations. Cerebras co-founder and chief executive Andrew Feldman talked with ZDNet a bit about what changes become possible in deep learning. There are three immediate implications that can be seen in what we know of the WSE so far.


Glaxo's biology research with novel Cerebras machine shows hardware may change how AI is done

ZDNet

Artificial intelligence research experienced a renaissance in the last twenty years, in part as a result of greater computing power, namely the rise of the graphics processing unit, or GPU. Now, novel AI computer systems may be poised to have a similarly large impact. They may change not just the speed of AI work but the kinds of experiments that are done in the field. AI is changing the entire nature of computing, and as part of an inevitable feedback loop, computing will end up changing the nature of AI. An example of that showed up this week in new work being done by GlaxoSmithKline, the big British drug maker.


Cerebras Debuts AI Supercomputer-on-a-Wafer – HPCwire – IAM Network

#artificialintelligence

Could a wafer-scale silicon chip from Cerebras Systems be the first "supercomputer on a chip" worthy of the designation? Last week at Hot Chips …


Cerebras Systems Unveils the Industry's First Trillion Transistor Chip

#artificialintelligence

Cerebras Systems, a startup dedicated to accelerating artificial intelligence (AI) compute, today unveiled the largest chip ever built. Optimized for AI work, the Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors and measures 46,225 square millimeters. The WSE is 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. In AI, chip size is profoundly important.
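The 56.7x claim can be sanity-checked from the die areas quoted above; a quick back-of-the-envelope sketch (using only the figures given in the announcement):

```python
# Figures quoted in the announcement
wse_area_mm2 = 46_225      # Cerebras WSE die area
gpu_area_mm2 = 815         # largest contemporary GPU die area
wse_transistors = 1.2e12   # "more than 1.2 trillion"
gpu_transistors = 21.1e9   # 21.1 billion

area_ratio = wse_area_mm2 / gpu_area_mm2
transistor_ratio = wse_transistors / gpu_transistors

print(f"area ratio: {area_ratio:.1f}x")              # ~56.7x, matching the quoted figure
print(f"transistor ratio: {transistor_ratio:.1f}x")  # ~56.9x, a similar multiple
```

The transistor counts scale by almost exactly the same factor as the area, which is what you would expect when both chips are fabricated on comparable process nodes.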


'We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve,' startup Cerebras tells supercomputing conference

ZDNet

For certain classes of problems in high-performance computing, all supercomputers have an unavoidable and fatal bottleneck: memory bandwidth. That is the argument made this week by one startup company at the SC20 supercomputing conference, which usually happens in San Diego but is being held virtually this week. The company making that argument is Cerebras Systems, the AI computer maker that contends its machine can solve problems at speeds no existing system can match. "We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve," Cerebras's CEO, Andrew Feldman, told ZDNet in an interview by Zoom. "This means the CS-1 for this work is the fastest machine ever built, and it's faster than any combination of clustering of other processors," he added.
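The memory-bandwidth argument can be illustrated with a simple roofline-style estimate. The numbers below are hypothetical, chosen only to show why a kernel with low arithmetic intensity is limited by bandwidth rather than by peak compute, no matter how many processors are added:

```python
def attainable_flops(peak_flops, mem_bandwidth_bytes, flops_per_byte):
    """Roofline model: delivered performance is capped by the lesser of
    the compute peak and what memory bandwidth can feed the cores
    (bandwidth * arithmetic intensity)."""
    return min(peak_flops, mem_bandwidth_bytes * flops_per_byte)

# Hypothetical accelerator: 100 TFLOP/s peak, 1 TB/s off-chip bandwidth
peak = 100e12
bandwidth = 1e12

# A sparse stencil/PDE-style kernel might perform ~0.25 FLOPs per byte moved:
# it is bandwidth-bound, delivering a tiny fraction of peak.
print(attainable_flops(peak, bandwidth, 0.25) / 1e12)   # 0.25 TFLOP/s

# A dense matrix multiply at very high intensity reaches the compute peak.
print(attainable_flops(peak, bandwidth, 1000.0) / 1e12)  # 100 TFLOP/s
```

This is the class of problem Feldman is referring to: when the kernel is bandwidth-bound, clustering more GPUs or CPUs adds peak FLOP/s that the memory system cannot feed, which is why Cerebras pitches its huge on-chip memory and bandwidth as the differentiator.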