If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Without a doubt, 2016 was an amazing year for Machine Learning (ML) and Artificial Intelligence (AI). During the year, we saw nearly every high-tech CEO claim the mantle of running an "AI company". However, only a few companies were actually able to monetize their significant investments in AI. But 2016 was nonetheless a year of many firsts. As a poster child for the potential of ML, Google DeepMind mastered the subtle and infinitely complex game of Go, soundly beating the reigning world champion.
In Supercomputing Conference (SC) years past, chipmaker Intel has always come forth with a strong story, either as an enabling processor or co-processor force, or more recently, as a prime contractor for a leading-class national lab supercomputer. But outside of a few announcements at this year's SC related to beefed-up SKUs for high performance computing and Skylake plans, the real emphasis back in Portland seemed to ring far fainter for HPC and much louder for the newest server tech darlings, deep learning and machine learning. Far from the HPC crowd last week was Intel's AI Day, an event in San Francisco chock full of announcements on both the hardware and software fronts, during a week that has historically emphasized Intel's evolving efforts in supercomputing. As we have noted before, there is a great deal of overlap between these two segments, so it is not fair to suggest that Intel is ditching one community for the other. In fact, it is quite the opposite; more specifically, these areas are merging to a greater degree (and far faster) than most could have anticipated.
At Supercomputing 2016 (SC16), IBM and NVIDIA announced what they call the fastest deep learning enterprise solution. The system is based on the IBM Power System S822LC platforms that were announced in September. These systems contain the latest version of the IBM POWER8 processor, which has NVIDIA NVLink embedded in it. IBM has also released a new deep learning toolkit called IBM PowerAI. The solution is capable of running AlexNet with Caffe up to 2x faster than comparable systems.
Today, when Intel announced a new generation of Xeon Phi server chips, the emphasis was on their ability to handle AI. Of the servers running such workloads, she said, 95 percent were doing machine learning while only 7 percent were handling deep learning: "the vast, vast majority of workloads are machine learning." The chips offer "advanced acceleration capabilities" for workloads like Google's TensorFlow deep learning framework, Google has said.
Today, Intel announced that its Xeon Phi processors are finally available to customers. The Xeon Phi processors feature double-precision performance in excess of 3 teraflops along with 8 teraflops of single-precision performance. All Xeon Phi processors incorporate 16GB of on-package MCDRAM memory, which Intel says is five times more power efficient than GDDR5 and offers 500GB/s of sustained memory bandwidth. According to Intel, it has shipped "tens of thousands" of its previous-generation "Knights Corner" Xeon Phi processors to date, and it expects the number of Xeon Phi processors sold to increase dramatically this year to over 100,000.
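The compute and bandwidth figures above can be put in perspective with a quick roofline calculation; this is a back-of-the-envelope sketch using only the numbers quoted in the article (3 teraflops double precision, 500 GB/s MCDRAM bandwidth), not any official Intel sizing tool:

```python
# Roofline sketch for the quoted Xeon Phi figures: a kernel's attainable
# performance is capped by the lower of the compute peak and
# (memory bandwidth * arithmetic intensity in flops per byte).

PEAK_DP_FLOPS = 3.0e12   # 3 teraflops double precision (from the article)
MCDRAM_BW = 500.0e9      # 500 GB/s sustained bandwidth (from the article)

def attainable_flops(arithmetic_intensity):
    """Roofline model: min(compute roof, memory roof)."""
    return min(PEAK_DP_FLOPS, MCDRAM_BW * arithmetic_intensity)

# Ridge point: intensity above which a kernel stops being memory-bound.
ridge = PEAK_DP_FLOPS / MCDRAM_BW   # 6.0 flops per byte

# Example: a double-precision triad a[i] = b[i] + s*c[i] moves 24 bytes
# for 2 flops (intensity ~0.083), so it is firmly memory-bound.
triad_perf = attainable_flops(2 / 24)   # ~42 GFLOP/s, far below peak
```

The takeaway is that only kernels doing more than about 6 flops per byte moved (dense matrix math, convolutions) can approach the 3-teraflop peak; streaming kernels live on the bandwidth roof.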
Shah, who works for Japanese financial holding company Nomura, seemed rather enthusiastic about Nvidia's data centre business after his chat. Huang apparently was animated about the prospects for the data centre business, as hyperscale companies quickly adopt throughput computing in an effort to accelerate workload performance. Nvidia's year-on-year data center revenues grew by 63 percent last quarter, mostly due to the "broad adoption" of the Tesla M40 GPU accelerator. One product name dropped was the Tesla M40, a GPU designed for machine learning that features 3,072 CUDA cores and 12GB of GDDR5 memory, with up to seven teraflops of single-precision performance.
Romit Shah, of Japanese financial holding company Nomura, has raised his rating on the shares in silicon specialist Nvidia, after spending time with CEO Jen-Hsun Huang and hearing about the company's plans for the future. Nvidia's year-on-year data center revenues grew by 63 percent last quarter, mostly due to the "broad adoption" of the Tesla M40 GPU accelerator. Nvidia claims that for machine learning workloads, the accelerator can deliver eight times more compute than a traditional CPU. Speaking to analysts earlier this year, Nvidia's CEO shared his views on the importance of machine learning: "In terms of how big that is going to be, my sense is that almost no transaction with the Internet will be without deep learning or some machine learning inference in the future."
If you want to get under Diane Bryant's skin these days, just ask her about GPUs. The head of Intel's data center group was at Computex in Taipei this week, in part to explain how the company's latest Xeon Phi processor is a good fit for machine learning. Machine learning is the process by which companies like Google and Facebook train software to get better at performing AI tasks including computer vision and understanding natural language. It's key to improving all kinds of online services: Google said recently that it's rethinking everything it does around machine learning. "It's a big opportunity, and there will be a hockey stick where every business will be using machine learning," she said in an interview.
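The "training" described above is, at its core, iterative parameter fitting: show the model examples, measure its error, and nudge its parameters to reduce that error. A minimal toy sketch of the idea (plain logistic regression with gradient descent; the data and names here are illustrative, not from any vendor framework):

```python
import math

def train(samples, labels, lr=0.5, steps=200):
    """Fit w, b so that sigmoid(w*x + b) predicts the 0/1 labels."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # model's prediction
            w -= lr * (p - y) * x                     # gradient step on w
            b -= lr * (p - y)                         # gradient step on b
    return w, b

# Toy task: learn to classify whether x is above 2.
w, b = train([0.0, 1.0, 3.0, 4.0], [0, 0, 1, 1])
```

Deep learning simply scales this loop up to millions of parameters and billions of examples, which is why the throughput of the underlying chip (Xeon Phi or GPU) matters so much.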
The Nvidia datacenter business had a bit of a lull in Nvidia's second and third quarters of fiscal 2016, which correspond roughly to the point in the "Maxwell" GPU product cycle last year when Tesla customers had perhaps been expecting a Maxwell kicker to the Tesla K40 and K80 accelerators with double-precision math and didn't get one. Add these Tesla accelerator sales to revenues derived from GRID virtual visualization platforms, and Nvidia's datacenter sales in its fiscal Q1 rose by 63 percent to $143 million, which we think was substantially impacted by Tesla P100 accelerator sales to the biggest hyperscalers in the world, who had first dibs. With the GRID virtual GPU platforms and the high-end Quadro graphics tools, Nvidia has a sizeable enterprise business now. The new GeForce GTX 1080 graphics card, which will be available on May 27, is based on the Pascal GP104 GPU chip, which has 20 SM compute blocks with 2,560 cores in total running at 1.61 GHz (with GPU Boost to 1.73 GHz), yielding 9 teraflops of single-precision performance for a suggested retail price of $599.
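The 9-teraflop figure follows directly from the core count and boost clock quoted above; a quick sanity check (the only assumption is the standard convention of counting a fused multiply-add as 2 flops):

```python
# Peak single-precision throughput for the GTX 1080 figures quoted above:
# cores * boost clock * 2 flops per cycle (one fused multiply-add).
cores = 2560          # CUDA cores (from the article)
boost_ghz = 1.73      # GPU Boost clock in GHz (from the article)

peak_tflops = cores * boost_ghz * 2 / 1000   # ~8.86, i.e. the "9 teraflops"
```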