If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
High-end GPUs are all the rage right now. They are driving deep neural network training for artificial intelligence, enabling the growing PC gaming market, and powering next-generation virtual reality, augmented reality, and mixed reality applications. There are only two high-end players in the GPU space: Advanced Micro Devices' Radeon group and NVIDIA. Over the past few years, AMD has dominated the gaming console market and the lower-end graphics market and shared the mid-range, while NVIDIA currently owns the datacenter and professional graphics segments, GPU DNN training, and the highest-end gaming graphics. AMD has gained unit graphics share over the past few quarters.
While AI (artificial intelligence) has been around since the 1950s, IBM was the pioneer in the latest AI cycle with its own custom solution, dubbed Watson. Ever since the introduction of Watson and its ability to beat Jeopardy! champion Ken Jennings, the company has been increasing its investment in the space. IBM Watson is now an entire division of the company, which indicates the importance IBM places on the future of AI. Watson is only one part of IBM's AI investment, and I consider it the "easy button" for those enterprises that don't want to create everything from scratch. IBM also has DIY (do-it-yourself) infrastructure for cloud providers through POWER8, OpenPOWER, and OpenCAPI, designed for cloud giants that roll their own AI software. But what about enterprises in the middle, those that want solid infrastructure and want to invest in the latest deep neural network frameworks?
In Supercomputing Conference (SC) years past, chipmaker Intel has always come forth with a strong story, either as an enabling processor or co-processor force, or more recently, as a prime contractor for a leading-class national lab supercomputer. But outside of a few announcements at this year's SC related to beefed-up SKUs for high performance computing and Skylake plans, the real emphasis back in Portland seemed to ring far fainter for HPC and much louder for the newest server tech darlings, deep learning and machine learning. Far from the HPC crowd last week was Intel's AI Day, an event in San Francisco chock-full of announcements on both the hardware and software fronts during a week that has historically emphasized Intel's evolving efforts in supercomputing. As we have noted before, there is a great deal of overlap between these two segments, so it is not fair to suggest that Intel is ditching one community for the other. In fact, it is quite the opposite--or more specifically, these areas are merging to a greater degree (and far faster) than most could have anticipated.
Nvidia's (NVDA) spectacular earnings and guidance last week provided good evidence that the GPU leader is on its way to making the powering of artificial intelligence workloads a 10-figure annual business. Since then, it hasn't wasted time announcing moves that grow its AI ecosystem and could help keep hungry rivals at bay. On Monday, Nvidia and IBM (IBM) announced the latter is rolling out a software toolkit called IBM PowerAI for IBM servers containing Nvidia's Tesla accelerator cards, which are widely used to handle a popular type of AI known as deep learning. IBM also rolled out a new server, the Power S822LC, that's optimized for AI and other high-performance computing (HPC) workloads: It pairs Big Blue's mammoth Power8 CPUs with Tesla accelerators and Nvidia's NVLink high-speed GPU interconnect. That day, Nvidia also announced it's teaming with Microsoft (MSFT) on a solution that lets businesses create AI workloads by using Microsoft's Cognitive Toolkit software on systems containing Tesla GPUs.
Today, when Intel announced a new generation of Xeon Phi server chips, the emphasis was on their ability to handle AI. Of all those servers, 7 percent were handling deep learning, while 95 percent were doing machine learning, she said. Of servers doing machine learning or deep learning, "the vast, vast majority of workloads are machine learning." The new chips offer "advanced acceleration capabilities" for workloads like Google's TensorFlow deep learning framework.
From ISC 2016 in Frankfurt, Germany, this week, Intel Corp. launched the second-generation Xeon Phi product family, formerly code-named Knights Landing, aimed at HPC and machine learning workloads. "We're not just a specialized programming model," said Barry Davis, Intel's General Manager of HPC Compute and Networking, in a hands-on technical demo held at ISC. "Knights Landing" also puts integrated on-package memory in the processor, which benefits memory bandwidth and overall application performance. By comparison, NVIDIA's Pascal P100 GPU for NVLink-optimized servers offers 5.3 teraflops of double-precision floating point performance, while the PCIe version delivers 4.7 teraflops of double precision.
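Taking the article's own throughput numbers, the NVLink P100 comes out roughly 13 percent ahead of the PCIe card in double precision; a quick back-of-the-envelope check (variable names are purely illustrative):

```python
# Double-precision throughput figures quoted above (TFLOPS).
nvlink_dp_tflops = 5.3   # P100 for NVLink-optimized servers
pcie_dp_tflops = 4.7     # PCIe P100

# Relative advantage of the NVLink part over the PCIe part.
nvlink_gain = nvlink_dp_tflops / pcie_dp_tflops - 1
print(f"NVLink double-precision advantage: {nvlink_gain:.1%}")  # ~12.8%
```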
"There are over 410 GPU-accelerated HPC applications, over 300,000 CUDA developers, and we accelerate all of the deep learning frameworks as well," Buck says, pointing to the data above to highlight the company's role in all of the most prevalent platforms to date. Yet another question one might ask during the AI-laden HPC talks this week is where deep learning and machine learning might fit in HPC workflows. It is Nvidia's goal to take that expertise in high performance computing and move it into deep learning and artificial intelligence--and the company expects that blend of those two worlds will be key to boosting next-generation AI applications. In essence, these can work exactly like a K80, except they have more memory bandwidth (depending on the model), slightly less memory, and about 6.5 percent more single-precision floating point capability (and 1.6X the double-precision throughput), plus support for FP16.
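The K80 comparison above can be sanity-checked against published spec-sheet numbers. The figures below are assumptions drawn from NVIDIA's public datasheets for the Tesla K80 (with GPU Boost) and the PCIe Tesla P100, not from this article, and the resulting ratios land close to the roughly 6.5 percent single-precision and 1.6X double-precision gains cited:

```python
# Assumed spec-sheet figures (NVIDIA datasheets; K80 numbers use boost clocks).
k80 = {"mem_gb": 24, "bandwidth_gbs": 480, "sp_tflops": 8.73, "dp_tflops": 2.91}
p100_pcie = {"mem_gb": 16, "bandwidth_gbs": 720, "sp_tflops": 9.3, "dp_tflops": 4.7}

# "More memory bandwidth, slightly less memory ..."
assert p100_pcie["bandwidth_gbs"] > k80["bandwidth_gbs"]
assert p100_pcie["mem_gb"] < k80["mem_gb"]

# "... about 6.5% more single precision and 1.6X double precision."
sp_gain = p100_pcie["sp_tflops"] / k80["sp_tflops"] - 1  # ~0.065
dp_ratio = p100_pcie["dp_tflops"] / k80["dp_tflops"]     # ~1.6
print(f"SP gain: {sp_gain:.1%}, DP ratio: {dp_ratio:.2f}x")
```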
Romit Shah, of Japanese financial holding company Nomura, has raised his rating on shares of silicon specialist Nvidia after spending time with CEO Jen-Hsun Huang and hearing about the company's plans for the future. Nvidia's year-over-year data center revenues grew by 63 percent last quarter, mostly due to the "broad adoption" of the Tesla M40 GPU accelerator. Nvidia claims that for machine learning workloads, the accelerator can deliver eight times more compute than a traditional CPU. Speaking to analysts earlier this year, Nvidia's CEO shared his views on the importance of machine learning: "In terms of how big that is going to be, my sense is that almost no transaction with the Internet will be without deep learning or some machine learning inference in the future."
If you want to get under Diane Bryant's skin these days, just ask her about GPUs. The head of Intel's data center group was at Computex in Taipei this week, in part to explain how the company's latest Xeon Phi processor is a good fit for machine learning. Machine learning is the process by which companies like Google and Facebook train software to get better at performing AI tasks including computer vision and understanding natural language. It's key to improving all kinds of online services: Google said recently that it's rethinking everything it does around machine learning. "It's a big opportunity, and there will be a hockey stick where every business will be using machine learning," she said in an interview.