AWS Announces Availability of New GPU Instances for Amazon EC2 - insideBIGDATA

#artificialintelligence

With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud. P2 instances allow customers to build and deploy compute-intensive applications using the CUDA parallel computing platform or the OpenCL framework without up-front capital investments. To offer the best performance for these high performance computing applications, the largest P2 instance offers 16 GPUs with a combined 192 gigabytes (GB) of video memory, 40,000 parallel processing cores, 70 teraflops of single-precision floating point performance, over 23 teraflops of double-precision floating point performance, and GPUDirect technology for higher-bandwidth, lower-latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors, dedicated network capacity for I/O operations, and enhanced networking through the Amazon EC2 Elastic Network Adapter. "Two years ago, we launched G2 instances to support customers running graphics and compute-intensive applications," said Matt Garman, Vice President, Amazon EC2.
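As a rough illustration of provisioning one of these instances, here is a minimal Python sketch using the boto3 EC2 client; the region, AMI ID, and key pair name are placeholders for illustration, not values from the announcement.

```python
# Minimal sketch: launching the largest P2 size with boto3.
# The AMI ID, key pair, and region below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="p2.16xlarge",       # largest P2 size: 16 Tesla K80 GPUs
    KeyName="my-key-pair",            # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched P2 instance: {instance_id}")
```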


AWS Announces Availability of P2 Instances for Amazon EC2

@machinelearnbot

With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud. "The massive parallel floating point performance of Amazon EC2 P2 instances, combined with up to 64 vCPUs and 732 GB host memory, will enable customers to realize results faster and process larger datasets than was previously possible." P2 instances allow customers to build and deploy compute-intensive applications using the CUDA parallel computing platform or the OpenCL framework without up-front capital investments. To offer the best performance for these high performance computing applications, the largest P2 instance offers 16 GPUs with a combined 192 gigabytes (GB) of video memory, 40,000 parallel processing cores, 70 teraflops of single-precision floating point performance, over 23 teraflops of double-precision floating point performance, and GPUDirect technology for higher-bandwidth, lower-latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors, dedicated network capacity for I/O operations, and enhanced networking through the Amazon EC2 Elastic Network Adapter.
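Once an instance like this boots, a natural first check is that all 16 GPUs are visible to the driver. A minimal sketch, assuming the NVIDIA driver and the standard nvidia-smi tool are installed:

```python
# Minimal sketch: confirm the GPUs a P2 instance exposes by parsing
# nvidia-smi output (assumes the NVIDIA driver and nvidia-smi are present).
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

gpus = result.stdout.strip().splitlines()
for line in gpus:
    print(line)  # e.g. "0, Tesla K80, 11441 MiB"

# A p2.16xlarge should report 16 GPUs, together totaling 192 GB of video memory.
print(f"Detected {len(gpus)} GPU(s)")
```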


Chipmakers Are Racing To Build Hardware For Artificial Intelligence

#artificialintelligence

In recent years, advanced machine learning techniques have enabled computers to recognize objects in images, understand commands from spoken sentences, and translate written language. But while consumer products like Apple's Siri and Google Translate might operate in real time, actually building the complex mathematical models these tools rely on can take traditional computers large amounts of time, energy, and processing power. As a result, chipmakers like Intel, graphics powerhouse Nvidia, mobile computing kingpin Qualcomm, and a number of startups are racing to develop specialized hardware to make modern deep learning significantly cheaper and faster. The importance of such chips for developing and training new AI algorithms quickly cannot be overstated, according to some AI researchers. "Instead of months, it could be days," Nvidia CEO Jen-Hsun Huang said in a November earnings call, discussing the time required to train a computer to do a new task.


How AI Accelerators Are Changing The Face Of Edge Computing

#artificialintelligence

AI has become the key driver for the adoption of edge computing. Originally, the edge computing layer was meant to deliver local compute, storage, and processing capabilities to IoT deployments. Sensitive data that cannot be sent to the cloud for processing and analysis is handled at the edge, which also reduces the latency of the round trip to the cloud. Much of the business logic that traditionally runs in the cloud is now moving to the edge to deliver lower latency and faster response times.
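As a rough illustration of that split, here is a minimal Python sketch of an edge node that scores sensor readings with a local model stub and forwards only a non-sensitive aggregate to the cloud; the model, threshold, and upload function are all hypothetical.

```python
# Minimal sketch of edge-side processing: score data locally, keep raw
# (potentially sensitive) readings on the device, and send only an
# aggregate summary to the cloud. All names here are hypothetical.
from statistics import mean

def run_local_model(reading: float) -> bool:
    """Stub for an on-device inference call; flags anomalous readings."""
    return reading > 75.0  # hypothetical anomaly threshold

def upload_summary(summary: dict) -> None:
    """Stub for the cloud upload; a real node might use MQTT or HTTPS."""
    print(f"uploading to cloud: {summary}")

readings = [71.2, 69.8, 80.5, 70.1, 77.3]  # raw sensor data stays local

anomalies = [r for r in readings if run_local_model(r)]
upload_summary({
    "count": len(readings),
    "mean": round(mean(readings), 2),
    "anomalies": len(anomalies),  # only the aggregate leaves the edge
})
```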