One More Reason For Running Machine Learning Jobs In The Cloud: GPUs


Top public cloud vendors want you to store massive data sets on their platforms to run complex Machine Learning algorithms. Apart from offering affordable compute and storage services based on a pay-as-you-go pricing model, they are also luring customers by bringing the latest GPU technology to the cloud. Why the sudden rush to offer GPUs in the cloud? The answer is simple – it's the rise of Machine Learning. Amazon, Google, IBM, and Microsoft each want to make their cloud the preferred platform for storing, processing, analyzing, and querying data.

Amazon Gets Serious About GPU Compute On Clouds


In the public cloud business, scale is everything – hyper, in fact – and having too many different kinds of compute, storage, or networking makes support more complex and investment in infrastructure more costly. We have estimated the single precision (SP) and double precision (DP) floating point performance of the GRID K520 card; the G2 instances have either one or four of these fired up, backed by an appropriate amount of CPU. The P2 instances deliver much better bang for the buck, particularly on double precision floating point work. For single precision floating point, the price per teraflops drops only around 22 percent from the G2 instances to the P2 instances, but the compute density of the node has gone up by a factor of 7.1X and the GPU memory capacity within a single node has gone up by a factor of 12X. That doesn't affect users all that much directly, but it does help Amazon provide GPU processing at a lower cost, because it takes fewer servers and GPUs to deliver a given chunk of teraflops.
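The price-per-teraflops comparison above can be sketched with a few lines of arithmetic. The hourly prices and peak SP throughput figures below are our assumptions for illustration (roughly the top-end G2 and P2 nodes as configured at P2 launch), not numbers taken from Amazon's price list:

```python
# Back-of-the-envelope $/teraflops comparison between G2 and P2 nodes.
# All hourly prices and peak throughput figures are assumed for illustration.

instances = {
    # name: (hourly price USD, peak SP teraflops per node, GPU memory GB per node)
    "g2.8xlarge":  (2.60,  9.8,  16),   # 4 GRID K520 GPUs (assumed specs)
    "p2.16xlarge": (14.40, 70.0, 192),  # 16 K80 GPUs, i.e. 8 cards (assumed specs)
}

def dollars_per_tflops(name):
    price, tflops, _ = instances[name]
    return price / tflops

g2 = dollars_per_tflops("g2.8xlarge")
p2 = dollars_per_tflops("p2.16xlarge")
drop_pct = (1 - p2 / g2) * 100  # percent cheaper per SP teraflops on P2

density_x = instances["p2.16xlarge"][1] / instances["g2.8xlarge"][1]
memory_x = instances["p2.16xlarge"][2] / instances["g2.8xlarge"][2]

print(f"SP $/TFLOPS: G2 {g2:.3f}, P2 {p2:.3f}, drop {drop_pct:.0f}%")
print(f"Node compute density up {density_x:.1f}X, GPU memory up {memory_x:.0f}X")
```

Under these assumed figures the script lands on roughly the ratios cited above: about a 22 percent drop in SP price per teraflops, with node compute density up about 7.1X and node GPU memory up 12X.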