What are neural processors?


Deep neural networks (DNNs) are powering the revolution in machine learning that is driving autonomous vehicles and many other real-time data analysis tasks. The two most popular kinds of DNN are convolutional -- for feature recognition -- and recurrent -- for time-series analysis. DNNs must be trained on massive tagged datasets to develop a model -- basically a set of matrices of feature weights -- that can then be run on local hardware. When a trained neural network classifies inputs or estimates values, the process is called inference.
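The train-then-infer split described above can be sketched in a few lines: the "model" is just fixed weight matrices, and inference is a forward pass over new input. This is a minimal NumPy illustration; the layer sizes, random stand-in weights, and ReLU activation are illustrative assumptions, not details from the article.

```python
import numpy as np

# Stand-in for a trained model: in practice these weight matrices
# would be learned from a large tagged dataset, then shipped to
# local hardware for inference.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)   # hidden -> 3 classes

def infer(x):
    """Inference: run the fixed weights on a new input vector."""
    h = np.maximum(x @ W1 + b1, 0)      # ReLU hidden layer
    scores = h @ W2 + b2                # class scores
    return int(np.argmax(scores))       # predicted class index

print(infer(rng.standard_normal(4)))    # one of 0, 1, or 2
```

Training adjusts `W1`, `W2`, `b1`, `b2`; inference, the workload the accelerators in the articles below target, only ever reads them.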

Intel launches Xeon E7 V4 processors, eyes analytics workloads


Intel has launched its latest Xeon processors and positioned them at the core of enterprise analytics. What remains to be seen is whether Intel's latest chips can bolster a waning server market. Patrick Buddenbaum, general manager of Intel's Enterprise IT Solutions, Data Center Group, said the company's Xeon Processor E7 V4 family is aimed at advancing real-time analytics and mission-critical computing for a host of industries. The general theme is that analytics is everywhere and embedded in every function -- marketing, finance, sales, IT, customer service, manufacturing, supply chain, and more. Intel's core pitch is that its new Xeons can turn data into insights faster. The catch is that analytics are being built into multiple software services such as Salesforce and Workday.

AWS brings EC2 C5a instances, powered by AMD, into general availability


Amazon Web Services is making cloud instances powered by AMD's Epyc Rome chips generally available. The Elastic Compute Cloud (EC2) C5a instances, powered by 2nd Gen AMD Epyc processors, offer the lowest cost per x86 virtual CPU in the Amazon EC2 portfolio. They're well suited to compute-intensive workloads that can take advantage of the 2nd Gen Epyc processor's high core counts, including video game development and hosting. Powered by a processor running at frequencies up to 3.3GHz, the Amazon EC2 C5a instances are available in eight configurations, with up to 96 virtual CPUs. This is the sixth instance family at AWS powered by Epyc processors.
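The "eight configurations, with up to 96 virtual CPUs" claim maps onto the published C5a size lineup, which can be sketched as a simple table; the vCPU counts below reflect AWS's C5a instance sizes, though readers should confirm current figures against AWS documentation.

```python
# EC2 C5a sizes and their vCPU counts (per AWS's published lineup);
# a quick sketch confirming eight configurations topping out at 96 vCPUs.
c5a_vcpus = {
    "c5a.large": 2,
    "c5a.xlarge": 4,
    "c5a.2xlarge": 8,
    "c5a.4xlarge": 16,
    "c5a.8xlarge": 32,
    "c5a.12xlarge": 48,
    "c5a.16xlarge": 64,
    "c5a.24xlarge": 96,
}

print(len(c5a_vcpus), max(c5a_vcpus.values()))  # 8 96
```

Each size doubles (or steps up by 16-vCPU increments at the high end), which is the usual EC2 pattern for a compute-optimized family.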

Photonics Processor Aimed at AI Inference


Silicon photonics is seeing greater innovation as demand grows for faster, lower-power chip interconnects in traditionally power-hungry applications like AI inferencing. With that in mind, scientists at the Massachusetts Institute of Technology launched a startup in 2017 called Lightmatter Inc. to develop silicon photonic processors. Another goal was leveraging optical computing to "decouple" AI processing from Moore's law scaling, which, according to the company's founders, literally produces more heat than light. At this week's Hot Chips conference, Lightmatter announced an AI photonic "test chip" positioned as an AI inference accelerator that uses light to process and transport data. The 3D module incorporates a 12-nm and a 90-nm ASIC, the latter supporting photonic processing steps such as laser monitoring and light distribution.

Google's dedicated TensorFlow processor, or TPU, crushes Intel, Nvidia in inference workloads - ExtremeTech


Before we hit the benchmark results, there are a few things to note. First, Turbo mode and GPU Boost were disabled for both the Haswell CPUs and Nvidia GPUs -- not to artificially tilt the score in favor of the TPU, but because Google's data centers prioritize dense hardware packing over raw performance. Higher turbo clock rates for the v3 Xeon depend on not using AVX instructions, which Google's neural networks all tend to use. As for Nvidia's K80, the test server in question deployed four K80 cards with two GPUs per card, for a total of eight GPUs. Packed that tightly, the only way to take advantage of the GPU's boost clock without causing an overheat would have been to remove two of the K80 cards.