CPUs, GPUs, and Now AI Chips
If you haven't heard about the artificial intelligence (AI) and machine-learning (ML) craze that uses deep neural networks (DNNs) and deep learning (DL) to tackle everything from voice recognition to making self-driving cars a reality, then you probably haven't heard about Google's new Tensor Processing Unit (TPU), Intel's Lake Crest, or Knupath's Hermosa. These are just a few of the chips vendors are delivering to target neural networks. The TPU contains a large 8-bit matrix-multiply unit (Figure 1). It essentially optimizes the number-crunching required by DNNs; large floating-point number-crunchers need not apply. The TPU is actually a coprocessor, managed by a conventional host CPU via the TPU's PCI Express interface.
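To see why an 8-bit matrix-multiply unit can stand in for a floating-point number-cruncher, here is a minimal sketch of quantized matrix multiplication in plain Python. The symmetric quantization scheme and the function names are illustrative assumptions for this sketch, not Google's actual TPU implementation; the point is only that integer multiplies plus a per-matrix scale factor approximate the floating-point result.

```python
# Sketch of 8-bit quantized matrix multiply, the style of arithmetic
# an 8-bit matrix unit performs. Simplified symmetric quantization;
# illustrative only, not the TPU's actual scheme.

def quantize(matrix, bits=8):
    """Map floats to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for row in matrix for v in row) / qmax or 1.0
    return [[round(v / scale) for v in row] for row in matrix], scale

def int_matmul(a, b):
    """Integer matrix multiply with wide accumulation."""
    cols = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols]
            for row in a]

def quantized_matmul(a, b):
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    # All the heavy lifting is integer math; one float scale at the end.
    return [[v * sa * sb for v in row] for row in int_matmul(qa, qb)]

a = [[1.0, -2.0], [0.5, 3.0]]
b = [[4.0, 0.0], [1.0, -1.0]]
print(quantized_matmul(a, b))  # close to the exact [[2, 2], [5, -3]]
```

For trained networks doing inference, this small quantization error is usually tolerable, which is why dedicated 8-bit hardware can trade precision for far more multiply units per die.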
Jun-11-2017, 01:55:13 GMT