Why Intel Is Tweaking Xeon Phi For Deep Learning


If there is anything that chip giant Intel has learned over the past two decades as it has gradually climbed to dominance in datacenter processing, it is, ironically, that one size most definitely does not fit all. As the tight co-design of hardware and software continues across the IT industry, we can expect fine-grained customization for very precise – and lucrative – workloads, such as data analytics and machine learning, to name just two of the hottest areas today. Software runs most efficiently on hardware that is tuned for it, although we are used to thinking of that process in mirror image, with programmers tweaking their code to take advantage of the forward-looking features a chip maker conceives of four or five years before they are etched into its transistors and delivered as a product.

The competition is fierce these days, and Intel has to move fast if it is to keep its compute hegemony in the datacenter. That is why, at the Intel Developer Forum in San Francisco, the company laid out a new path for the Knights family of many-core processors, one that will see it deliver a version of the chip specifically tuned for machine learning workloads.