Element AI, a Montreal-based platform and incubator, wants to be the go-to place for companies large and small that are building, or want to adopt, AI solutions but lack the talent and other resources to get started. Today it is announcing a mammoth Series A round of $102 million. Investors include Fidelity Investments Canada, Korea's Hanwha, Intel Capital, Microsoft Ventures, National Bank of Canada, NVIDIA, Real Ventures, and "several of the world's largest sovereign wealth funds." The problem may be new, but the basic model is not: Element AI is essentially leaning on trends in outsourcing, where systems integrators, business process outsourcers, and others have built multi-billion-dollar businesses by consulting on, or fully taking the reins of, projects that businesses do not consider their core competency. Element AI says its initial products include predictive modeling, forecasting models for small data sets, conversational AI and natural language processing, image recognition and automatic tagging of attributes based on images, "aggregation techniques" based on machine learning, reinforcement learning for physics-based motion control, compression of time-series data, statistical machine learning algorithms, voice recognition, recommendation systems, fluid simulation, consumer engagement optimization, and computational advertising.
I am really excited to announce that the Azure N-Series will be generally available on December 1, 2016. Azure N-Series virtual machines are powered by NVIDIA GPUs and give customers and developers access to industry-leading accelerated computing and visualization experiences. I am also excited to announce global access to these sizes, with the N-Series available in South Central US, East US, West Europe, and South East Asia, all on December 1. Thousands of customers have participated in the N-Series preview since we launched it back in August, and we've heard positive feedback on the enhanced performance and on the work we have done with NVIDIA to make this a completely turnkey experience for you.
On Monday, IBM announced that it has collaborated with Nvidia to provide a complete package for customers who want to jump right into the deep learning market without the hassle of determining and setting up the right combination of hardware and software. The company also revealed that a cloud-based model is available that eliminates the need to install local hardware and software. To trace this project, we have to jump back to September, when IBM launched a new series of "OpenPower" servers built on the company's Power8 processor. The launch was notable because this chip features integrated NVLink technology, a proprietary communications link created by Nvidia that directly connects the central processor to an Nvidia graphics processor, namely the Tesla P100 in this case. Server-focused x86 processors from Intel and AMD don't have this type of integrated connectivity between the CPU and GPU.
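To see why that CPU-to-GPU link matters, here is a back-of-the-envelope comparison; the bandwidth figures below are Nvidia's published first-generation NVLink and standard PCIe 3.0 numbers, not figures taken from the article, so treat this as an illustrative sketch.

```python
# Rough one-direction bandwidth comparison: first-generation NVLink
# (as on Power8 + Tesla P100) versus the PCIe 3.0 x16 slot that x86
# servers use to reach the GPU. Figures are Nvidia's published specs.

PCIE3_X16_GBPS = 16          # ~PCIe 3.0 x16, one direction
NVLINK1_PER_LINK_GBPS = 20   # NVLink 1.0, one direction per link
P100_NVLINK_LINKS = 4        # Tesla P100 exposes four NVLink links

nvlink_total = NVLINK1_PER_LINK_GBPS * P100_NVLINK_LINKS
speedup = nvlink_total / PCIE3_X16_GBPS

print(f"NVLink aggregate: {nvlink_total} GB/s "
      f"({speedup:.0f}x PCIe 3.0 x16)")
```

Aggregating all four links, the P100 can move data to and from the Power8 CPU roughly five times faster than a PCIe-attached GPU, which is the advantage IBM's OpenPower servers are built around.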
Large Internet companies are using it to roll out online services that understand images and speech, and deep-learning chips are being designed into drones, driverless cars, and other products in the much-ballyhooed "Internet of things." But Nvidia has taken a commanding lead in the nascent deep-learning market since big Internet companies discovered how well graphics chips could handle AI-related jobs. Qualcomm is introducing software tools to help customers use its mobile chips for deep learning. Knupath, which was started by former NASA chief Dan Goldin, announced an AI chip called Hermosa in June, along with software to link up 512,000 Hermosas and other chips.
DJI, the world's largest drone maker, designed a "visual processing unit" made by Movidius into its new Phantom 4 model.
As Nvidia's CEO stressed at this year's GPU Technology Conference, deep learning is a target market for the GPU maker, fed in part by a new range of GPUs for training and executing deep neural networks: the Tesla M40 and M4, the existing supercomputing-focused K80, and now the P100, Nvidia's latest Pascal processor, which is at the heart of a new appliance designed specifically for deep learning workloads. While cloud rival Amazon Web Services, among others, is sporting GPU cards for high performance computing (HPC) and deep learning users, the partnership between Nvidia and IBM gives Big Blue a leg up in making a wider array of GPUs available to suit different workloads. Today that suite of GPU options was enriched with the addition of the virtualization-ready Nvidia M60 cards, which can support a wider range of workloads, from HPC applications to machine learning to virtual services and gaming platforms. As our own Timothy Prickett Morgan noted earlier this year, Nvidia currently identifies six cloud providers that offer cloud-based or hosted GPU capacity.
Huang said deep learning will be the basis for the entire computer industry, including data centers and the cloud, for years to come, and that he believes AI and deep learning will transform data centers and cloud services. Rajat Monga, a Google technical lead and manager of TensorFlow, the open source machine learning library developed at Google, said the company thinks deep learning will infuse every Google service, including new areas such as robotics. Huang also showed off what he called the world's first car computing platform powered by deep learning.
And if you do the math against 250 servers with just CPUs, or even a four- or eight-GPU Pascal box, a lot of customers are fine with a two-GPU or four-GPU box, and for these customers the Tesla M40 and Tesla K80 make a lot of sense to keep going. While Nvidia shipped a very powerful and energy-efficient Maxwell-based Tesla aimed at certain kinds of single-precision workloads (machine learning, seismic processing, genomics, signal processing, and video encoding all work perfectly fine at single precision and gain nothing from double precision), it never got a Maxwell chip into the field in a Tesla form factor with more double-precision floating point performance than the Tesla K40. But for customers doing machine learning, the increase in cores and clock speeds with the Pascal GP100 GPU compared to the Maxwell GM200, plus the shift to FP16 half-precision math, means that its 16 GB of HBM2 memory holds as many values as 32 GB would at FP32 single precision, and that the effective performance of the device, as it double pumps the CUDA cores, comes in at 21.2 teraflops. These numbers suggest to us that, at least initially, Nvidia can charge at least twice as much for raw SP and DP performance compared to the Tesla K40s and Tesla K80s, and then add a premium for FP16, memory bandwidth, and NVLink GPU lashing.
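The FP16 arithmetic above can be sketched concretely. The 21.2-teraflops figure and the 16 GB HBM2 capacity come from the text; the halved FP32 rate follows from the "double pumping" described there (each FP32 unit issuing two FP16 operations per cycle), and the 32 GB equivalence follows from FP16 values taking half the bytes of FP32.

```python
# Back-of-the-envelope GP100 figures from the article: 16 GB of HBM2
# and 21.2 TFLOPS of double-pumped FP16. FP16 values are 2 bytes vs
# 4 bytes for FP32, so half precision doubles both the effective
# memory capacity and the arithmetic throughput.

BYTES_FP32 = 4
BYTES_FP16 = 2

hbm2_gb = 16
values_fp16 = hbm2_gb * 1e9 / BYTES_FP16          # values that fit at FP16
equiv_fp32_gb = values_fp16 * BYTES_FP32 / 1e9    # FP32 bytes for same count

fp16_tflops = 21.2
fp32_tflops = fp16_tflops / 2   # each FP32 core double-pumps two FP16 ops

print(f"{values_fp16/1e9:.0f} billion FP16 values fit in {hbm2_gb} GB HBM2")
print(f"equivalent FP32 footprint: {equiv_fp32_gb:.0f} GB")
print(f"implied FP32 throughput: {fp32_tflops:.1f} TFLOPS")
```

In other words, a model that would need 32 GB of memory at single precision fits in the P100's 16 GB at half precision, which is why the article says the card "looks and feels" twice as large to machine learning workloads.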