IBM and Nvidia make deep learning easy for AI service creators with a new bundle


On Monday, IBM announced a collaboration with Nvidia to provide a complete package for customers who want to jump into the deep learning market without the hassle of determining and setting up the right combination of hardware and software. The company also revealed a cloud-based option that eliminates the need to install local hardware and software.

The project traces back to September, when IBM launched a new series of "OpenPower" servers built around the company's Power8 processor. The launch was notable because the chip features integrated NVLink technology, a proprietary interconnect created by Nvidia that directly connects the central processor to an Nvidia graphics processor, in this case the Tesla P100. Server-focused x86 processors from Intel and AMD lack this type of integrated CPU-to-GPU connectivity.

Nvidia CEO bets on artificial intelligence as the future of computing


Huang said deep learning will be the basis for the entire computer industry, including data centers and the cloud, for years to come, and that he believes AI and deep learning will transform data centers and cloud services. Rajat Monga, a Google technical lead and manager of TensorFlow, the open source machine learning library developed at Google, said the company expects deep learning to infuse every Google service, including new areas such as robotics. Huang also pointed to what he called the world's first car computing platform powered by deep learning.
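For readers unfamiliar with what frameworks like TensorFlow actually automate, the core idea can be sketched in a few lines. The toy below is not TensorFlow's API; it is a hypothetical, minimal example of gradient descent, the optimization routine at the heart of deep learning training, fitting a single weight to data.

```python
# Illustrative toy: fit y = w * x by gradient descent, minimizing squared error.
# Deep learning frameworks such as TensorFlow automate this process at scale,
# across millions of weights and on GPU hardware.

def train(samples, lr=0.1, epochs=50):
    """Learn the weight w in y = w * x from (x, y) pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of (w*x - y)^2 w.r.t. w
            w -= lr * grad             # step against the gradient
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = train(data)
print(round(w, 3))  # converges toward 2.0
```

Real networks stack many such weights into layers and rely on GPUs to compute the gradients in parallel, which is why the CPU-to-GPU bandwidth that NVLink provides matters for these workloads.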