If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Top public cloud vendors want you to store massive data sets on their platforms so you can run complex Machine Learning algorithms there. Apart from offering affordable compute and storage services based on a pay-as-you-go pricing model, they are also luring customers by bringing the latest GPU technology to the cloud. Why the sudden rush to offer GPUs in the cloud? The answer is simple: the rise of Machine Learning. Amazon, Google, IBM, and Microsoft all want their cloud to be your preferred platform for storing, processing, analyzing, and querying data.
In the public cloud business, scale is everything – hyper, in fact – and having too many different kinds of compute, storage, or networking makes support more complex and investment in infrastructure more costly. We have estimated the single precision (SP) and double precision (DP) floating point performance of the GRID K520 card; the G2 instances have either one or four of these fired up, with an appropriate amount of CPU to back them. The P2 instances deliver much better bang for the buck, particularly on double precision floating point work. For single precision, the price drop per teraflops from the G2 instances to the P2 instances is only around 22 percent, but the compute density of the node has gone up by a factor of 7.1X and the GPU memory capacity by a factor of 12X within a single node. That doesn't affect users all that much directly, but it does help Amazon provide GPU processing at a lower cost because it takes fewer servers and GPUs to deliver a chunk of teraflops.
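The price-per-teraflops comparison above can be sketched in a few lines of Python. The hourly prices and TFLOPS figures below are hypothetical placeholders chosen only to illustrate the calculation, not quoted AWS rates or measured GRID K520 / P2 throughput:

```python
# Illustrative price-per-teraflops comparison between two GPU instance
# generations. All prices and TFLOPS numbers are hypothetical placeholders.

def price_per_tflops(hourly_price, tflops):
    """Dollars per hour paid for each teraflop of peak throughput."""
    return hourly_price / tflops

# Hypothetical single-precision figures for a G2-style and a P2-style node.
g2 = price_per_tflops(hourly_price=2.60, tflops=19.4)    # older, 4-GPU node
p2 = price_per_tflops(hourly_price=14.40, tflops=135.0)  # newer, 16-GPU node

drop = (g2 - p2) / g2 * 100          # percent cheaper per teraflops
density = 135.0 / 19.4               # compute density gain per node

print(f"G2: ${g2:.3f}/TFLOPS-hr, P2: ${p2:.3f}/TFLOPS-hr")
print(f"price drop per TFLOPS: {drop:.0f}%, node density gain: {density:.1f}X")
```

The point of the sketch is that a modest per-teraflops price drop can coexist with a large jump in per-node density: the provider packs far more compute into each server, so fewer servers are needed to deliver the same aggregate teraflops.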