Results


Intel Proclaims Machine Learning Nervana

#artificialintelligence

In a blog post today, Intel (NASDAQ:INTC) CEO Brian Krzanich announced the Nervana Neural Network Processor (NNP). The Intel Nervana NNP promises to revolutionize AI computing across myriad industries. Using Intel Nervana technology, companies will be able to develop entirely new classes of AI applications that maximize the amount of data processed and enable customers to find greater insights – transforming their businesses... We have multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models. This puts us on track to exceed the goal we set last year of achieving 100 times greater AI performance by 2020.


Which GPU(s) to Get for Deep Learning

@machinelearnbot

With a good, solid GPU, one can quickly iterate over deep learning networks and run experiments in days instead of months, hours instead of days, minutes instead of hours. Later I ventured further down the road and developed a new 8-bit compression technique that enables you to parallelize dense or fully connected layers much more efficiently with model parallelism than 32-bit methods do. For example, if you have differently sized fully connected layers or dropout layers, the Xeon Phi is slower than the CPU. GPUs excel at problems that involve large amounts of memory due to their memory bandwidth.
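
The 8-bit compression idea is easy to picture: before a fully connected layer's activations (or gradients) are shipped to another GPU under model parallelism, each value is squeezed into a single byte, so the interconnect carries a quarter of the float32 traffic. The snippet below is only a minimal sketch using plain linear quantization, not the author's actual technique; the shapes and function names are made up for illustration.

    import numpy as np

    def quantize_uint8(x):
        """Map a float32 array onto 0..255 with a per-tensor scale and offset."""
        lo, hi = float(x.min()), float(x.max())
        scale = (hi - lo) / 255.0 or 1.0          # avoid a zero scale for constant tensors
        q = np.round((x - lo) / scale).astype(np.uint8)
        return q, scale, lo

    def dequantize_uint8(q, scale, lo):
        """Recover an approximate float32 array from the 8-bit payload."""
        return q.astype(np.float32) * scale + lo

    # Hypothetical activations of one fully connected layer (batch of 128, 4096 units).
    activations = np.random.randn(128, 4096).astype(np.float32)
    q, scale, lo = quantize_uint8(activations)     # this is what would cross the interconnect
    recovered = dequantize_uint8(q, scale, lo)

    print("bytes as float32:", activations.nbytes)   # 4 bytes per value
    print("bytes as uint8:  ", q.nbytes)             # 1 byte per value
    print("max abs error:   ", float(np.abs(recovered - activations).max()))

The trade-off is the usual one: a 4x reduction in communication volume in exchange for a small, bounded quantization error per value.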


Which GPU(s) to Get for Deep Learning

#artificialintelligence

Deep learning is a field with intense computational requirements, and the choice of your GPU will fundamentally determine your deep learning experience. With no GPU, this might look like months of waiting for an experiment to finish, or running an experiment for a day or more only to see that the chosen parameters were off. With a good, solid GPU, one can quickly iterate over deep learning networks and run experiments in days instead of months, hours instead of days, minutes instead of hours. Making the right choice when it comes to buying a GPU is therefore critical. So how do you select the GPU that is right for you?
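
In practice the article's advice boils down to comparing memory bandwidth and memory size across the cards in your budget. The toy comparison below uses approximate spec figures for Pascal-era cards; treat the numbers as placeholders to verify against vendor datasheets rather than ground truth.

    # Illustrative only: rank a few cards of that era by memory bandwidth,
    # the metric the article leans on most. Spec figures are approximate.
    candidates = {
        "GTX 1070":         (256, 8),    # (GB/s bandwidth, GB memory), approximate
        "GTX 1080":         (320, 8),
        "Titan X (Pascal)": (480, 12),
    }

    for name, (bandwidth, memory) in sorted(candidates.items(),
                                            key=lambda kv: kv[1][0], reverse=True):
        print(f"{name:18s} {bandwidth:4d} GB/s  {memory:2d} GB")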


Why Intel Is Tweaking Xeon Phi For Deep Learning

#artificialintelligence

The Knights Landing Xeon Phi chips, which have been shipping in volume since June, deliver a peak performance of 3.46 teraflops at double precision and 6.92 teraflops at single precision, but do not support half precision math like the Pascal GPUs do. The Pascal chips, which run at 300 watts, would still deliver better performance per watt – specifically, 70.7 gigaflops per watt, compared to 56 gigaflops per watt for the hypothetical Knights Landing-based Knights Mill chip discussed above. The "Knights Corner" chip from 2013 was rated at slightly more than 2 teraflops single precision, and the Knights Landing chip from this year is rated at 6.92 teraflops single precision. Thus, we have a strong feeling that the chart above is not to scale, or that Intel showed half precision for the Knights Mill part and single precision for the Knights Corner and Knights Landing parts.
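
As a rough sanity check on those ratios: 70.7 gigaflops per watt at 300 watts implies a peak of about 21.2 teraflops, which lines up with the half-precision rating of Nvidia's Pascal Tesla P100. Treating that 21.2-teraflop figure as our own assumption (it is not stated in the excerpt), the arithmetic works out as follows.

    # Back-of-envelope check of the performance-per-watt figures quoted above.
    # The 21.2 TFLOPS half-precision peak for the Pascal part is our assumption;
    # the 300 W envelope and the 70.7 GF/W ratio come from the excerpt.
    pascal_tflops_fp16 = 21.2                        # assumed half-precision peak
    pascal_watts = 300                               # quoted power envelope
    print(pascal_tflops_fp16 * 1000 / pascal_watts)  # ~70.7 gigaflops per watt

    # Knights Landing's quoted single- and double-precision peaks differ by exactly 2x.
    print(6.92 / 3.46)                               # 2.0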


Intel Unveils Plans for Artificial-Intelligence Chips

#artificialintelligence

The company told technology developers Wednesday that it plans next year to deliver a new version of the Xeon Phi processor, a product line previously targeted at scientific applications, with added features designed to accelerate tasks associated with what Silicon Valley calls artificial intelligence. Diane Bryant, executive vice president in charge of Intel's data center group, said Wednesday at an Intel event that the model coming out next year will handle additional instructions designed specifically for such computing jobs. "When it comes to AI, Intel's Xeon Phi is a great fit," said Jing Wang, a senior vice president at the Chinese search-engine company Baidu Inc., who joined Ms. Bryant on stage at Intel's annual developer forum in San Francisco. Besides Xeon Phi, Intel signaled a strong interest in artificial intelligence with a deal last week to buy Nervana Systems, a startup working on specialized chips and software aimed at deep learning.


Lack of a high-quality GPU may hold Intel back on A.I. and VR

#artificialintelligence

In 2009, Intel gave up on developing Larrabee, a homegrown discrete GPU targeted at PC gaming systems. For A.I., Intel is pitching high-performance chips called Xeon Phi, which were derived from Larrabee. At IDF, the company announced a specialized Xeon Phi chip called Knights Mill for A.I. If Intel had high-performance GPUs like AMD and Nvidia do, the company would be able to participate more broadly in VR and AR, said Patrick Moorhead, principal analyst at Moor Insights and Strategy.


Intel Challenges Nvidia in Machine Learning

#artificialintelligence

At the Intel Developer Forum yesterday, the company even brought out an executive from Chinese cloud giant Baidu to talk about the Xeon Phi, Intel's machine learning chip. Intel executive vice president Diane Bryant mentioned Nervana during yesterday's keynote, but with the deal still not closed, it's understandable that she didn't articulate Intel's plans for the startup. In addition to Baidu's senior vice president Jing Wang, Bryant brought out Slater Victoroff, founder of Indico, a startup using deep learning to analyze text and images. He said he prefers the Intel model, where the host processor also runs the deep learning algorithms.


NVIDIA Cries Foul on Intel Phi AI Benchmarks

#artificialintelligence

This week saw the eruption of a vendor spat when NVIDIA, developer of GPUs widely used in the AI/machine learning market, alleged foul play against Intel in recent comparative benchmark results involving Intel's Xeon Phi processors. "With the more recent implementation of Caffe AlexNet, publicly available here, Intel would have discovered that the same system with four Maxwell GPUs delivers 30 percent faster training time than four Xeon Phi servers. In fact, a system with four Pascal-based NVIDIA TITAN X GPUs trains 90 percent faster and a single NVIDIA DGX-1 is over 5x faster than four Xeon Phi servers." In response to Intel's statement that Xeon Phi offers 38 percent better scaling than GPUs across nodes, Buck said, "Intel is comparing Caffe GoogleNet training performance on 32 Xeon Phi servers to 32 servers from Oak Ridge National Laboratory's Titan supercomputer.
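
To make those competing claims easier to compare, it helps to translate "X percent faster" into relative training time against the four-Xeon-Phi baseline; the small sketch below does that under the assumption (ours, not NVIDIA's) that "faster" refers to throughput.

    # Our reading of "X percent faster" as a throughput ratio: relative training
    # time of each system versus the four-Xeon-Phi baseline NVIDIA cites.
    baseline_time = 1.0   # training time of four Xeon Phi servers, normalized

    claimed_speedups = {
        "4x Maxwell GPUs (30% faster)":   1.30,
        "4x Pascal TITAN X (90% faster)": 1.90,
        "1x DGX-1 (over 5x faster)":      5.0,
    }

    for system, speedup in claimed_speedups.items():
        print(f"{system:33s} -> {baseline_time / speedup:.2f}x the baseline time")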