Artificial Intelligence Processing Moving from Cloud to Edge
The recent rise of artificial intelligence (AI) can be attributed in part to improvements in graphics processing units (GPUs), deployed mostly in cloud server architectures. GPUs are massively parallel processors well suited to the large number of vector and matrix multiplication operations required in deep learning. GPUs were originally designed to perform the matrix operations behind three-dimensional (3D) computer graphics, but deep learning turns out to have similar computational requirements, and GPUs have proven effective at accelerating both the training and inference of AI algorithms. Hyperscale internet companies, including Google, Facebook, Amazon, and Microsoft, have built massive cloud server farms that perform industrial-scale AI training and inference, fueled by the troves of consumer data they collect, which in turn further improves their AI algorithms. NVIDIA has been the main beneficiary of this trend, as its GPUs power the majority of these cloud-based AI data centers.
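The mapping from deep learning to matrix multiplication can be seen in the forward pass of a fully connected layer, sketched below in NumPy. The layer sizes here are illustrative assumptions, not taken from any particular network:

```python
import numpy as np

# A batch of 64 input vectors, each with 1024 features (illustrative sizes).
x = np.random.rand(64, 1024).astype(np.float32)

# Weights and bias of a fully connected layer with 512 output units.
W = np.random.rand(1024, 512).astype(np.float32)
b = np.zeros(512, dtype=np.float32)

# The forward pass is a single matrix multiplication plus a bias add --
# exactly the kind of dense, parallel arithmetic GPUs were built for.
y = x @ W + b

print(y.shape)  # (64, 512)
```

Stacking many such layers, each a large matrix multiply, is why deep learning workloads map so naturally onto hardware originally built for 3D graphics.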
Jun-10-2017, 05:20:15 GMT