Results


IBM Power9 bulks up for AI workloads

#artificialintelligence

The latest proprietary Power servers from IBM, armed with the long-awaited IBM Power9 processors, look for relevance among next-generation enterprise workloads, but the company will need some help from its friends to take on its biggest market challenger. IBM emphasizes increased speed and bandwidth with its AC922 Power Systems to better take on high-performance computing tasks, such as building models for AI and machine learning training. The company said it plans to pursue mainstream commercial applications, such as building supply chains and medical diagnostics, but those broader-based opportunities may take longer to materialize. "Most big enterprises are doing research and development on machine learning, with some even deploying such projects in niche areas," said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. "But it will be 12 to 18 months before enterprises can even start driving serious volume in that space."


Nvidia's New AI Can Generate Mind-Blowing Fake Videos

#artificialintelligence

Previous techniques relied on massive amounts of data and had problems training the machines to find their own patterns. Researchers had a hard time with mapping a low-resolution image to a corresponding high-resolution image and with colorization (mapping a gray-scale image to a corresponding color image). Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive at the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, the authors make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs.
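The shared-latent-space idea can be sketched with toy linear maps (illustrative only; in the actual UNIT framework the encoders and generators are neural networks trained adversarially with coupled GANs, not the random matrices used here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoders" E1, E2 map images from two domains into one shared latent
# space, and "generators" G1, G2 decode a latent code back into each domain.
# Fixed random linear maps stand in for the learned networks, purely to
# illustrate the wiring of the shared-latent-space assumption.
dim_x, dim_z = 16, 4
E1 = rng.normal(size=(dim_z, dim_x))
E2 = rng.normal(size=(dim_z, dim_x))
G1 = rng.normal(size=(dim_x, dim_z))
G2 = rng.normal(size=(dim_x, dim_z))

def translate_1_to_2(x1):
    """Translate a domain-1 image into domain 2 via the shared latent code."""
    z = E1 @ x1      # encode domain-1 input into the shared latent space
    return G2 @ z    # decode the shared code into domain 2

x1 = rng.normal(size=dim_x)       # a stand-in "domain-1 image"
x2_hat = translate_1_to_2(x1)     # its domain-2 translation
print(x2_hat.shape)
```

Because both domains share one latent space, the same code `z` could equally be decoded by `G1`, which is what lets the framework learn cross-domain translation without paired training examples.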


IBM's Power9-based AC922 system designed for AI workloads

#artificialintelligence

IBM is ready to start shipping the first commercial server systems built around its recently released Power9 processor. Dubbed the AC922 Power Systems, these servers will ship by the end of December, and are specifically designed for artificial intelligence (AI) workloads, reports Enterprise Cloud News (Banking Technology's sister publication). The AC922 is the commercial version of the same servers that IBM, along with Nvidia and Mellanox Technologies, is using to build two new supercomputers for the US Department of Energy. The "Summit" and "Sierra" supercomputers are expected to go online in 2018, and could reinvigorate the US's standing in the world of high-performance computing. At the heart of the AC922 is IBM's recently released Power9 processor.


IBM's new Power9 chip was built for AI and machine learning

#artificialintelligence

In a world that requires increasing amounts of compute power to handle the resource-intensive demands of workloads like artificial intelligence and machine learning, IBM enters the fray with its latest generation Power chip, the Power9. The company intends to sell the chips to third-party manufacturers and to cloud vendors including Google. Meanwhile, it's releasing a new computer powered by the Power9 chip, the AC922, and it intends to offer the chips as a service on the IBM Cloud. "We generally take our technology to market as a complete solution," Brad McCredie, IBM fellow and vice president of cognitive systems, explained. The company has designed the new chip specifically to improve performance on common AI frameworks like Chainer, TensorFlow and Caffe, and claims up to a nearly 4x performance increase for workloads running on these frameworks.


IBM rolls out first Power9 servers, systems optimized for AI

ZDNet

IBM launched its first systems based on its Power9 processor and optimized for artificial intelligence workloads. Big Blue's Power Systems Servers can improve training times of deep learning frameworks by 4x, according to IBM. The Power9 processors and systems built on them are partly the product of collaboration in the OpenPower Foundation, which includes IBM, Google, Mellanox, Nvidia and a bevy of other players. Those technologies are designed to boost bandwidth and throughput in data movement, and that faster data movement is what shortens model training time.


What Is the Deep Learning AMI? - Deep Learning AMI

#artificialintelligence

Welcome to the User Guide for the Deep Learning AMI. The Deep Learning AMI (DLAMI) is your one-stop shop for deep learning in the cloud. This customized machine instance is available in most Amazon EC2 regions for a variety of instance types, from a small CPU-only instance to the latest high-powered multi-GPU instances. It comes preconfigured with NVIDIA CUDA and NVIDIA cuDNN, as well as the latest releases of the most popular deep learning frameworks. This guide will help you launch and use the DLAMI.
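Launching a DLAMI ultimately comes down to an EC2 `RunInstances` request. A minimal sketch of building that request as a plain dict (the AMI ID and key name below are placeholders, not real values; look up the DLAMI ID for your region in the AWS console before launching):

```python
def dlami_launch_params(ami_id, instance_type="p3.2xlarge", key_name=None):
    """Build keyword arguments for boto3's EC2 run_instances call.

    ami_id must be the Deep Learning AMI ID for your region; the value
    passed below is a hypothetical placeholder for illustration only.
    """
    params = {
        "ImageId": ami_id,
        "InstanceType": instance_type,  # GPU instance; a CPU type also works
        "MinCount": 1,
        "MaxCount": 1,
    }
    if key_name:
        params["KeyName"] = key_name    # SSH key pair for logging in
    return params

# Placeholder AMI ID -- substitute your region's actual DLAMI.
params = dlami_launch_params("ami-0123456789abcdef0", key_name="my-key")
# boto3.client("ec2").run_instances(**params)  # real launch; needs credentials
```

Keeping the parameters as a plain dict makes the request easy to inspect or test before handing it to boto3.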


Amazon And NVIDIA Simplify Machine Learning

#artificialintelligence

While this announcement was completely expected, it is an important milestone along the road to simplifying and lowering the costs of Machine Learning development and deployment for AI projects. When NVIDIA announced the NVIDIA GPU Cloud last May at GTC, I explained in this blog that the purpose was to create a registry of compatible and optimized ML software containers which could then, in theory, run on the cloud of users' choice. That vision has now become a reality, at least for Amazon.com's AWS. I expect other Cloud Service Providers to follow soon, given the momentum in the marketplace for the 120 TFLOPS Volta GPUs. Why do you need NVIDIA's GPU Cloud for ML?


TensorFlow Gains Hardware Support

#artificialintelligence

There are a number of machine learning (ML) architectures that utilize deep neural networks (DNNs), including AlexNet, VGGNet, GoogLeNet, Inception, ResNet, FCN, and U-Net. These in turn run on frameworks like Berkeley's Caffe, Google's TensorFlow, Torch, Microsoft's Cognitive Toolkit (CNTK), and Apache's MXNet. Of course, support for these frameworks on specific hardware is required to actually run the ML applications. Each framework has advantages and disadvantages. For example, Caffe is an easy platform to start with, especially since one of its popular uses is image recognition.
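Whatever the framework, the DNN architectures listed above reduce to the same core tensor operations, which is why hardware support matters at the framework level: a single fully connected layer's forward pass, for example, is just a matrix multiply plus a nonlinearity, sketched here in plain NumPy:

```python
import numpy as np

def dense_forward(x, W, b):
    """One fully connected layer: affine transform followed by ReLU.

    This is the kind of primitive that Caffe, TensorFlow, CNTK, etc.
    dispatch to hardware-specific kernels (e.g. cuDNN on NVIDIA GPUs).
    """
    return np.maximum(0.0, x @ W + b)   # ReLU(xW + b)

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 32))    # batch of 8 inputs, 32 features each
W = rng.normal(size=(32, 64))   # layer weights: 32 inputs -> 64 outputs
b = np.zeros(64)                # layer bias
y = dense_forward(x, W, b)
print(y.shape)
```

A framework's hardware backend replaces the `@` and `maximum` here with tuned GPU or accelerator kernels while keeping the same mathematical contract.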


GPU-Accelerated Amazon Web Services

#artificialintelligence

Developers, data scientists, and researchers are solving today's complex challenges with breakthroughs in artificial intelligence, deep learning, and high performance computing (HPC). NVIDIA is working with Amazon Web Services to offer the newest and most powerful GPU-accelerated cloud service based on the latest NVIDIA Volta architecture: the Amazon EC2 P3 instance. Using up to eight NVIDIA Tesla V100 GPUs, you will be able to train your neural networks on massive data sets with any of the major deep learning frameworks faster than ever before, then use GPU parallel computing, running billions of computations, to infer and identify known patterns or objects. With over 500 GPU-accelerated applications, including the top ten HPC applications and every major deep learning framework, you can quickly tap into the power of the Tesla V100 GPUs on AWS to boost performance, scale out, accelerate time to results, and save money.
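Training across eight V100s typically means data parallelism: the batch is split across GPUs, each computes gradients on its shard, and the results are averaged. A minimal CPU-only sketch of that pattern, with NumPy standing in for the per-GPU computation (the "gradient" here is just a shard mean, purely to show the split/reduce shape of the algorithm):

```python
import numpy as np

def shard_batch(batch, num_workers):
    """Split a batch across workers, as a data-parallel trainer does across GPUs."""
    return np.array_split(batch, num_workers)

def all_reduce_mean(grads):
    """Average per-worker gradients, mimicking an all-reduce step."""
    return np.mean(grads, axis=0)

rng = np.random.default_rng(2)
batch = rng.normal(size=(256, 10))      # 256 examples, 10 features
shards = shard_batch(batch, 8)          # one shard per (simulated) GPU
# Stand-in per-worker "gradient": each worker reduces its shard to a vector.
per_worker = [shard.mean(axis=0) for shard in shards]
g = all_reduce_mean(per_worker)         # combined update, identical on all workers
print(len(shards), g.shape)
```

Because the shards are equal-sized, averaging the per-worker results gives exactly the value a single worker would have computed on the whole batch, which is the correctness property data-parallel training relies on.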


AWS says its new monster GPU array is 'most powerful' in the cloud

ZDNet

Amazon Web Services (AWS) has launched new P3 instances on its EC2 cloud computing service which are powered by Nvidia's Tesla Volta architecture V100 GPUs and promise to dramatically speed up the training of machine learning models. The P3 instances are designed to handle compute-intensive machine learning, deep learning, computational fluid dynamics, computational finance, seismic analysis, molecular modelling, and genomics workloads. Amazon said the new services could reduce the training time for sophisticated deep learning models from days to hours. These are the first instances to include Nvidia Tesla V100 GPUs, and AWS said its P3 instances are "the most powerful GPU instances available in the cloud".