IBM and Nvidia make deep learning easy for AI service creators with a new bundle

#artificialintelligence

On Monday, IBM announced that it has collaborated with Nvidia to provide a complete package for customers who want to jump into the deep learning market without the hassle of determining and setting up the right combination of hardware and software. The company also revealed a cloud-based option that eliminates the need to install local hardware and software altogether. To trace this project, we have to jump back to September, when IBM launched a new series of "OpenPower" servers built around the company's Power8 processor. The launch was notable because this chip features integrated NVLink technology, a proprietary communications link created by Nvidia that directly connects the central processor to an Nvidia-based graphics processor, namely the Tesla P100 in this case. Server-focused x86 processors from Intel and AMD don't have this type of integrated connectivity between the CPU and GPU.


Amazon Gets Serious About GPU Compute On Clouds

#artificialintelligence

In the public cloud business, scale is everything – hyper, in fact – and having too many different kinds of compute, storage, or networking makes support more complex and investment in infrastructure more costly. We have estimated the single precision (SP) and double precision (DP) floating point performance of the GRID K520 card, and the G2 instances have either one or four of these fired up with an appropriate amount of CPU to back them. The P2 instances deliver much better bang for the buck, particularly on double precision floating point work. For single precision work, the price per teraflops drops only around 22 percent from the G2 instances to the P2 instances. But the compute density of the node has gone up by a factor of 7.1X, and the GPU memory capacity within a single node by a factor of 12X – which doesn't affect users much directly, but does help Amazon provide GPU processing at a lower cost, because it takes fewer servers and GPUs to deliver a given chunk of teraflops.
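The price-per-teraflops comparison above boils down to simple ratios. A minimal sketch of the arithmetic, using purely hypothetical hourly prices and throughput figures (the article does not quote the raw numbers, so these are illustrative only):

```python
def price_per_tflops(hourly_price_usd, tflops):
    # Cost of one teraflops-hour of floating-point compute.
    return hourly_price_usd / tflops

def percent_drop(old, new):
    # Relative price reduction, as a percentage of the old price.
    return (old - new) / old * 100.0

# Hypothetical figures for illustration only -- not actual AWS pricing.
g2_cost = price_per_tflops(hourly_price_usd=2.60, tflops=9.0)    # older GPU node
p2_cost = price_per_tflops(hourly_price_usd=14.40, tflops=64.0)  # denser GPU node

print(f"G2: ${g2_cost:.3f}/TFLOPS-hr, P2: ${p2_cost:.3f}/TFLOPS-hr")
print(f"Price drop per SP TFLOPS: {percent_drop(g2_cost, p2_cost):.0f}%")
```

The point of the density argument is visible here: even when the per-teraflops price barely moves, a node that packs several times more teraflops lets the provider serve the same aggregate demand with far fewer machines.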


Intel Outside as Other Companies Prosper from AI Chips

#artificialintelligence

Large Internet companies are using deep learning to roll out online services that understand images and speech, and deep-learning chips are being designed into drones, driverless cars, and other products in the much-ballyhooed "Internet of things." But Nvidia has taken a commanding lead in the nascent deep-learning market since big Internet companies discovered how well graphics chips could handle AI-related jobs. Qualcomm is introducing software tools to help customers use its mobile chips for deep learning. Knupath, which was started by former NASA chief Dan Goldin, announced an AI chip called Hermosa in June, along with software to link up 512,000 Hermosas and other chips. DJI, the world's largest drone maker, designed a "visual processing unit" made by Movidius into its new Phantom 4 model.


Intel's data center chief talks machine learning -- just don't ask about GPUs

PCWorld

If you want to get under Diane Bryant's skin these days, just ask her about GPUs. The head of Intel's powerful data center group was at Computex in Taipei this week, in part to explain how the company's latest Xeon Phi processor is a good fit for machine learning. Machine learning is the process by which companies like Google and Facebook train software to get better at performing AI tasks including computer vision and understanding natural language. It's key to improving all kinds of online services: Google said recently that it's rethinking everything it does around machine learning. "It's a big opportunity, and there will be a hockey stick where every business will be using machine learning," she said in an interview.
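The "training" described above is, at its core, an iterative loop that adjusts a model's parameters to reduce its error on example data. A toy sketch of that idea – plain gradient descent on a one-parameter model, nothing specific to Intel's, Google's, or Facebook's actual stacks:

```python
# Toy machine-learning loop: learn w so that the prediction w * x matches y.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # examples of the true rule y = 2x

w = 0.0    # model parameter, starts uninformed
lr = 0.05  # learning rate: how far to move per step

for step in range(200):
    # Gradient of the mean squared error over the data set.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the parameter in the direction that reduces error

print(round(w, 3))  # converges toward 2.0, the underlying rule
```

Real workloads run this same loop over millions of parameters and examples, which is why the choice between GPUs and many-core chips like Xeon Phi matters: both are bids to execute these repetitive, highly parallel updates as fast as possible.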


IBM Extends GPU Cloud Capabilities, Targets Machine Learning

#artificialintelligence

As GPU maker Nvidia's CEO stressed at this year's GPU Technology Conference, deep learning is a target market, fed in part by a new range of their GPUs for training and executing deep neural networks, including the Tesla M40 and M4, the existing supercomputing-focused K80, and now the P100 (Nvidia's latest Pascal processor, which is at the heart of a new appliance designed specifically for deep learning workloads). While cloud rival Amazon Web Services, among others, is sporting GPU cards for high performance computing (HPC) and deep learning users, the partnership between Nvidia and IBM gives Big Blue a leg up in making a wider array of GPUs available to suit different workloads. Today that suite of GPU options was enriched with the addition of the virtualization-ready Nvidia M60 cards, which can support a wider range of workloads--from HPC applications, to machine learning workloads, to virtual services and gaming platforms. As our own Timothy Prickett Morgan noted earlier this year, Nvidia currently identifies six cloud providers that offer cloud-based or hosted GPU capacity.