Results


Which GPU(s) to Get for Deep Learning

@machinelearnbot

With a good, solid GPU, one can quickly iterate over deep learning networks and run experiments in days instead of months, hours instead of days, minutes instead of hours. Later I ventured further down the road and developed a new 8-bit compression technique, which enables you to parallelize dense or fully connected layers much more efficiently with model parallelism than 32-bit methods do. For example, if you have differently sized fully connected layers or dropout layers, the Xeon Phi is slower than the CPU. GPUs excel at problems that involve large amounts of memory thanks to their high memory bandwidth.
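The bandwidth argument behind the 8-bit idea is easy to see in miniature: quantizing the activations (or weights) that cross a model-parallel boundary to int8 cuts the traffic between devices to a quarter of the 32-bit volume. The sketch below is a generic linear-quantization illustration of that saving, not the author's specific compression scheme; the function names and tensor shapes are made up for the example.

```python
import numpy as np

def quantize_8bit(x):
    """Map a float32 tensor to int8 plus a per-tensor scale (illustrative only)."""
    scale = np.abs(x).max() / 127.0 if x.size else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_8bit(q, scale):
    """Recover an approximate float32 tensor from the int8 values and the scale."""
    return q.astype(np.float32) * scale

# Activations exchanged between model-parallel fully connected layers shrink 4x
# (int8 vs float32), cutting the communication that dominates model parallelism.
acts = np.random.randn(64, 4096).astype(np.float32)
q, s = quantize_8bit(acts)
recovered = dequantize_8bit(q, s)
print(q.nbytes / acts.nbytes)          # 0.25: one quarter of the traffic
print(np.abs(acts - recovered).max())  # small quantization error
```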


Why Intel Is Tweaking Xeon Phi For Deep Learning

#artificialintelligence

The Knights Landing Xeon Phi chips, which have been shipping in volume since June, deliver a peak performance of 3.46 teraflops at double precision and 6.92 teraflops at single precision, but do not support half precision math like the Pascal GPUs do. The Pascal chips, which run at 300 watts, would still deliver better performance per watt: 70.7 gigaflops per watt, compared to 56 gigaflops per watt for the hypothetical Knights Landing-based Knights Mill chip discussed above. The "Knights Corner" chip from 2013 was rated at slightly more than 2 teraflops single precision, and the Knights Landing chip from this year is rated at 6.92 teraflops single precision. Thus, we have a strong feeling that the chart above is not to scale, or that Intel showed half precision for the Knights Mill part and single precision for the Knights Corner and Knights Landing parts.
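The per-watt figures follow from simple division, though the excerpt does not spell out the inputs. Below is a rough back-of-the-envelope check, assuming the Tesla P100's roughly 21.2 teraflops of half-precision peak at 300 watts and a hypothetical Knights Mill that doubles Knights Landing's 6.92 single-precision teraflops within a roughly 245-watt envelope; both of those inputs are assumptions, not figures from the article.

```python
# Rough reproduction of the performance-per-watt numbers quoted above.
def gflops_per_watt(teraflops, watts):
    return teraflops * 1000 / watts

print(round(gflops_per_watt(21.2, 300), 1))    # ~70.7 GF/W for a 300 W Pascal (assumed 21.2 TF FP16)
print(round(gflops_per_watt(13.84, 245), 1))   # ~56.5 GF/W, close to the quoted 56 (assumed 2 x 6.92 TF at ~245 W)
```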


Intel Unveils Plans for Artificial-Intelligence Chips

#artificialintelligence

The company told technology developers Wednesday that it plans next year to deliver a new version of the Xeon Phi processor, a product line previously targeted at scientific applications, with added features designed to accelerate tasks associated with what Silicon Valley calls artificial intelligence. Diane Bryant, executive vice president in charge of Intel's data center group, said Wednesday at an Intel event that the model coming out next year will handle additional instructions designed specifically for such computing jobs. "When it comes to AI, Intel's Xeon Phi is a great fit," said Jing Wang, a senior vice president at the Chinese search-engine company Baidu Inc., who joined Ms. Bryant on stage at Intel's annual developer forum in San Francisco. Besides Xeon Phi, Intel signaled a strong interest in artificial intelligence with a deal last week to buy Nervana Systems, a startup working on specialized chips and software aimed at deep learning.


Lack of a high-quality GPU may hold Intel back on A.I. and VR

#artificialintelligence

In 2009, Intel gave up on developing Larrabee, a homegrown discrete GPU targeted at PC gaming systems. For A.I., Intel is pitching high-performance chips called Xeon Phi, which were derived from Larrabee. At IDF, the company announced a specialized Xeon Phi chip called Knights Mill for A.I. If Intel had the kind of high-performance GPUs that AMD and Nvidia do, it would be able to participate in a wider capacity in VR and AR, said Patrick Moorhead, principal analyst at Moor Insights and Strategy.


Intel Challenges Nvidia in Machine Learning

#artificialintelligence

At the Intel Developer Forum yesterday, the company even brought out an executive from Chinese cloud giant Baidu to talk about the Xeon Phi, Intel's machine learning chip. Intel executive vice president Diane Bryant mentioned Nervana during yesterday's keynote, but with the deal still not closed, it's understandable that she didn't articulate Intel's plans for the startup. In addition to Baidu's senior vice president Jing Wang, Bryant brought out Slater Victoroff, founder of Indico, a startup using deep learning to analyze text and images. He said he prefers the Intel model, where the host processor also runs the deep learning algorithms.


NVIDIA Cries Foul on Intel Phi AI Benchmarks

#artificialintelligence

This week saw the eruption of a vendor spat when NVIDIA, developer of GPUs widely used in the AI/machine learning market, alleged foul play by Intel in recent comparative benchmark results involving Intel's Xeon Phi processors. "With the more recent implementation of Caffe AlexNet, publicly available here, Intel would have discovered that the same system with four Maxwell GPUs delivers 30 percent faster training time than four Xeon Phi servers. In fact, a system with four Pascal-based NVIDIA TITAN X GPUs trains 90 percent faster and a single NVIDIA DGX-1 is over 5x faster than four Xeon Phi servers." In response to Intel's statement that Xeon Phi offers 38 percent better scaling than GPUs across nodes, Buck said, "Intel is comparing Caffe GoogleNet training performance on 32 Xeon Phi servers to 32 servers from Oak Ridge National Laboratory's Titan supercomputer."
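For a sense of scale, the speedup claims can be normalized to relative training time against the four-Xeon-Phi baseline. The sketch below reads "N percent faster" as N percent higher throughput, which is one plausible interpretation of the quote, not something the article confirms.

```python
# Illustrative only: convert the quoted speedup claims into relative training
# time, taking four Xeon Phi servers as the 1.0 baseline.
def relative_time(speedup_pct=None, speedup_x=None):
    """'30 percent faster' -> time / 1.30; '5x faster' -> time / 5."""
    factor = 1 + speedup_pct / 100 if speedup_pct is not None else speedup_x
    return 1.0 / factor

print(relative_time(speedup_pct=30))   # ~0.77x baseline time: four Maxwell GPUs
print(relative_time(speedup_pct=90))   # ~0.53x baseline time: four Pascal TITAN X GPUs
print(relative_time(speedup_x=5))      # 0.20x baseline time: one DGX-1
```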


Intel unveils next-generation Xeon Phi chips for A.I.

#artificialintelligence

Silicon Valley is full of chatter about artificial intelligence, deep learning neural networks, and machine learning. Baidu will use the upcoming Xeon Phi chips in the data centers it is building for its Deep Speech platform, where its networks will be able to parse natural language speech as quickly and accurately as possible. But in recent years, Nvidia's graphics chips have become a lot more useful in servers dedicated to neural networks, which can process unstructured data such as video or speech and recognize patterns more easily. Intel argues that its Xeon Phi chips will run at "comparable levels of performance" to Nvidia's graphics processing units.


NVIDIA Corrects Intel's

#artificialintelligence

NVIDIA has started marketing some of its GPUs, like the Tesla P100, more to the enterprise market for corporate data centers, and that's likely to increase as server and cloud-computing tasks become more complex. NVIDIA saw 112% year-over-year growth in its data center revenue in Q2 2017, driven in part by deep learning. Intel's latest Xeon Phi processors are certainly serious competition for NVIDIA, and will likely make for very good deep-learning machines.