If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Top public cloud vendors want you to store massive data sets on their platforms so you can run complex machine learning algorithms there. Apart from offering affordable compute and storage services based on a pay-as-you-go pricing model, they are also luring customers by bringing the latest GPU technology to the cloud. Why the sudden rush to offer GPUs in the cloud? The answer is simple: the rise of machine learning. Amazon, Google, IBM, and Microsoft each want their cloud to be the preferred platform for storing, processing, analyzing, and querying data.
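To make the pay-as-you-go model concrete, here is a minimal sketch that estimates the cost of a training job on an hourly-billed instance. The hourly rates and run times below are hypothetical placeholders, not actual vendor pricing.

```python
# Hypothetical pay-as-you-go cost estimate for a cloud training job.
# All rates and durations are illustrative placeholders, not real vendor prices.

def job_cost(hours: float, hourly_rate: float) -> float:
    """Return the total cost of running a job for `hours` at `hourly_rate` per hour."""
    return round(hours * hourly_rate, 2)

# Compare a slow CPU instance against a faster (but pricier) GPU instance.
cpu_cost = job_cost(hours=40.0, hourly_rate=0.90)   # 40 h on a CPU VM
gpu_cost = job_cost(hours=4.0, hourly_rate=2.70)    # same job, 10x faster on a GPU VM

print(f"CPU: ${cpu_cost}, GPU: ${gpu_cost}")
```

Under these made-up numbers the GPU instance is cheaper overall despite the higher hourly rate, which is the economics driving the vendors' GPU push.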
"We've taken it from a research tool to something that works in a production setting," said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of the Microsoft Cognitive Toolkit. "Microsoft Cognitive Toolkit represents tight collaboration between Microsoft and NVIDIA to bring advances to the deep learning community," said Ian Buck, general manager of the Accelerated Computing Group at NVIDIA. As expected, the new version of the toolkit offers better and faster performance than its predecessor. Separately, Microsoft recently released the Windows 10 Creators Update, which enables anyone to capture, create, and share in 3D.
This jointly optimized platform runs the new Microsoft Cognitive Toolkit (formerly CNTK) on NVIDIA GPUs, including the NVIDIA DGX-1 supercomputer, which uses Pascal-architecture GPUs with NVLink interconnect technology, and on Azure N-Series virtual machines, currently in preview. Faster performance: compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster on the NVIDIA GPUs available in Azure N-Series servers and on premises.
But not all companies can afford that level of resources for deep learning, so they turn to cloud services, where servers in remote data centers do the heavy lifting. Azure, however, uses older Nvidia GPUs, and it now has competition from Nimbix, which offers a cloud service with faster GPUs based on Nvidia's latest Pascal architecture. Nimbix runs its service on Tesla P100s -- among Nvidia's fastest GPUs -- in IBM Power S822LC servers. Microsoft's Azure offers cloud services on servers running Nvidia's Tesla K80, based on the older Kepler architecture, and Tesla M40, based on Maxwell, a generation behind Pascal.
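The generational gap described above (Kepler, then Maxwell, then Pascal) can be sketched as a simple lookup that ranks the GPUs mentioned in this article by architecture age. The mapping below encodes only the facts stated here; it is not a complete catalog of Nvidia hardware.

```python
# Architecture generations in release order, as described in the article:
# Kepler (oldest) -> Maxwell -> Pascal (newest).
GENERATION_ORDER = ["Kepler", "Maxwell", "Pascal"]

# GPUs mentioned in the article and their architectures.
GPU_ARCHITECTURE = {
    "Tesla K80": "Kepler",    # offered on Azure
    "Tesla M40": "Maxwell",   # offered on Azure
    "Tesla P100": "Pascal",   # offered on Nimbix / IBM Power S822LC
}

def newest_gpu(gpus):
    """Return the GPU with the most recent architecture in the list."""
    return max(gpus, key=lambda g: GENERATION_ORDER.index(GPU_ARCHITECTURE[g]))

print(newest_gpu(["Tesla K80", "Tesla M40", "Tesla P100"]))  # Tesla P100
```

Ranking by architecture generation is exactly the comparison the article makes: Nimbix's Pascal-based P100 sits two generations ahead of Azure's Kepler-based K80.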