Results


NVIDIA: Open Source Deep Learning is Powering the AI Revolution

#artificialintelligence

In this chapter of our thought leadership series, AI Business caught up with Kari Ann Briski, the Director of Deep Learning Software Product at NVIDIA. Deep learning is being applied to many big data problems, from computer vision, image recognition and speech recognition to autonomous vehicles. "I've personally seen so many fun and interesting AI applications, from large organizations to small businesses and individuals who previously knew nothing about deep learning," Kari explains. "NVIDIA heavily contributes to open source projects, both to the frameworks (deep learning libraries) themselves and by publishing neural networks that we have researched for specific AI applications."


5 Reasons Why Your Data Science Team Needs The DGX Station

#artificialintelligence

I immediately pulled a container and started work on a CNTK NCCL project, and the next day pulled another container to work on a TensorFlow biomedical project. By running NVIDIA OptiX 5.0 on a DGX Station, content creators can significantly accelerate training, inference and rendering (meaning both AI and graphics tasks). The DGX Station also gives teams the flexibility to do AI work at the desk, in the data center, or at the edge (www.nvidia.com/dgx-station). "However, for our current projects we need a compute server that we have exclusive access to."


Scaling TensorFlow and Caffe to 256 GPUs - IBM Systems Blog: In the Making

@machinelearnbot

And since model training is an iterative task, where a data scientist tweaks hyper-parameters, models, and even the input data, and trains the AI models multiple times, these kinds of long training runs delay time to insight and can limit productivity. The IBM Research team took on this challenge, and through innovative clustering methods has built a "Distributed Deep Learning" (DDL) library that hooks into popular open source machine learning frameworks like TensorFlow, Caffe, Torch and Chainer. Figure 1: Scaling results using Caffe to train a ResNet-50 model using the ImageNet-1K data set on 64 Power Systems servers that have a total of 256 NVIDIA P100 GPU accelerators in them. This release includes the distributed deep learning library and a technology preview for the vision capability that we announced in May.
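
IBM's DDL library itself is not shown here; as a rough illustration of the synchronous data-parallel pattern that such libraries hook into frameworks like TensorFlow with, the sketch below uses TensorFlow's own tf.distribute.MirroredStrategy as a stand-in. The tiny model and synthetic data are placeholders for ResNet-50 and ImageNet-1K, not IBM's setup.

```python
# Minimal multi-GPU data-parallel training sketch (illustrative only; this uses
# TensorFlow's built-in MirroredStrategy, NOT IBM's DDL library).
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # replicates the model on each local GPU
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # A tiny stand-in model; a real run would use something like ResNet-50.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1000, activation="softmax"),
    ])
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Synthetic data standing in for ImageNet-1K; gradients are all-reduced across replicas.
images = tf.random.uniform((64, 224, 224, 3))
labels = tf.random.uniform((64,), maxval=1000, dtype=tf.int32)
model.fit(images, labels, batch_size=32, epochs=1)
```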


IBM Plays With The AI Giants With New, Scalable And Distributed Deep Learning Software

#artificialintelligence

Today, IBM Research announced a new breakthrough that will only serve to further enhance PowerAI and its other AI offerings--groundbreaking Distributed Deep Learning (DDL) software, one of the biggest announcements I've tracked in this space over the past six months. Most AI servers today are just a single system, not multiple systems combined. To paint a picture, when IBM initially tried to train a model with the ImageNet-22K data set, using a ResNet-101 model, it took 16 days on a single Power "Minsky" server, using four NVIDIA P100 GPU accelerators. To top it all off, IBM says DDL scales efficiently--across up to 256 GPUs, with up to 95% efficiency on the Caffe deep learning framework.
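
To make the "95% efficiency" figure concrete: scaling efficiency is the measured speedup divided by the ideal linear speedup. The back-of-the-envelope script below plugs in the numbers quoted above (16 days on 4 GPUs, scaling to 256 GPUs at 95% efficiency); the resulting runtime is an illustration of the arithmetic, not a figure reported by IBM.

```python
# Back-of-the-envelope scaling-efficiency arithmetic (illustrative; the hours
# figure below is derived from the formula, not a number reported by IBM).
baseline_gpus = 4          # one Power "Minsky" server with four P100s
baseline_days = 16.0       # reported single-server training time
target_gpus = 256
efficiency = 0.95          # reported scaling efficiency on Caffe

ideal_speedup = target_gpus / baseline_gpus        # 64x if scaling were perfect
actual_speedup = ideal_speedup * efficiency        # ~60.8x at 95% efficiency
estimated_hours = baseline_days * 24 / actual_speedup

print(f"Ideal speedup:  {ideal_speedup:.1f}x")
print(f"Actual speedup: {actual_speedup:.1f}x")
print(f"Estimated time on {target_gpus} GPUs: {estimated_hours:.1f} hours")
```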


The AI Revolution Is Eating Software: NVIDIA Is Powering It - NVIDIA Blog

#artificialintelligence

It's great to see the two leading teams in AI computing race while we collaborate deeply across the board – tuning TensorFlow performance, and accelerating the Google cloud with NVIDIA CUDA GPUs. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics. Such leaps in performance have drawn innovators from every industry, with the number of startups building GPU-driven AI services growing more than 4x over the past year to 1,300. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.


Learning Machine Learning

#artificialintelligence

Massive Open Online Courses (MOOCs) are a good starting point, with a lot to offer. The article entitled "Top Machine Learning MOOCs and Online Lectures: A Comprehensive Survey" lists a number of good resources. For example, the MXNet website lists a number of data set sources for CNNs and RNNs. Intel's Python-based Neon framework, from Nervana (now an Intel company), sits alongside other frameworks and platforms such as Apache Spark, TensorFlow, Caffe, and Theano.
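
Since the MXNet site is cited for CNN and RNN data sets, the sketch below shows the kind of small CNN its tutorials build, written against MXNet's Gluon API; the layer sizes and the 28x28 input are arbitrary placeholders, not taken from the article.

```python
# Minimal CNN definition in MXNet's Gluon API (illustrative sketch only;
# layer sizes are arbitrary placeholders, not taken from the article).
from mxnet import nd, init
from mxnet.gluon import nn

net = nn.Sequential()
net.add(
    nn.Conv2D(channels=32, kernel_size=3, activation="relu"),
    nn.MaxPool2D(pool_size=2),
    nn.Conv2D(channels=64, kernel_size=3, activation="relu"),
    nn.MaxPool2D(pool_size=2),
    nn.Flatten(),
    nn.Dense(128, activation="relu"),
    nn.Dense(10),            # e.g. 10 classes for an MNIST-style data set
)
net.initialize(init.Xavier())

# Shapes are inferred lazily on the first forward pass.
x = nd.random.uniform(shape=(1, 1, 28, 28))   # one fake 28x28 grayscale image
print(net(x).shape)                           # -> (1, 10)
```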


The Era of AI Computing - Fedscoop

#artificialintelligence

As Moore's law slows down, GPU computing performance, powered by improvements in everything from silicon to software, surges. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics. The NVIDIA GPU Cloud platform gives AI developers access to our comprehensive deep learning software stack wherever they want it--on PCs, in the data center or via the cloud. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.


To Accelerate Artificial Intelligence, NVIDIA & Baidu Signed Partnership

#artificialintelligence

NVIDIA and Baidu announced a broad partnership to bring the world's leading artificial intelligence technology to cloud computing, self-driving vehicles and AI home assistants. Speaking in the keynote at Baidu's AI developer conference in Beijing, Baidu president and COO Qi Lu described his company's plans to work with NVIDIA to bring next-generation NVIDIA Volta GPUs to Baidu Cloud, providing cloud customers with the world's leading deep learning platform. The two companies will also optimize Baidu's PaddlePaddle open source deep learning framework for NVIDIA Volta GPUs and make it widely available to academics and researchers. "Our collaboration aligns our exceptional technical resources to create AI computing platforms for all developers – from academic research to startups creating breakthrough AI applications to autonomous vehicles."
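
For readers who have not used PaddlePaddle, the sketch below shows what targeting a GPU looks like in its current Python API; it is a generic illustration assuming the paddlepaddle-gpu package is installed, not the Volta-specific optimization work described in the announcement.

```python
# Minimal PaddlePaddle-on-GPU sketch (generic illustration; not the
# Volta-specific optimizations described in the announcement).
import paddle

# Select the first GPU if a CUDA-enabled paddlepaddle-gpu build is installed;
# fall back to CPU otherwise so the sketch still runs.
paddle.set_device("gpu:0" if paddle.is_compiled_with_cuda() else "cpu")

x = paddle.randn([8, 16])            # a small random batch
layer = paddle.nn.Linear(16, 4)      # a toy layer standing in for a real model
y = layer(x)
print(y.shape)                       # -> [8, 4]
```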


Google's Second AI Chip Crashes Nvidia's Party

#artificialintelligence

On Wednesday at its annual developers conference, the tech giant announced the second generation of its custom chip, the Tensor Processing Unit, optimized to run its deep learning algorithms. In contrast, Nvidia announced its latest generation of GPUs in a data center product called the Tesla V100, which delivers 120 teraflops of performance, Nvidia said. Through the Google Cloud, anybody can rent Cloud TPUs -- similar to how people can rent GPUs on the Google Cloud. "Google's use of TPUs for training is probably fine for a few workloads for the here and now, but given the rapid change in machine learning frameworks, sophistication, and depth, I believe Google is still doing much of their machine learning production and research training on GPUs," said tech analyst Patrick Moorhead.
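
As a rough sketch of what "renting a Cloud TPU" looks like from the framework side, a TensorFlow program attaches to a TPU along these lines; the TPU name is a hypothetical placeholder, and this illustrates the public tf.distribute API rather than anything Google-internal.

```python
# Sketch of attaching TensorFlow to a rented Cloud TPU (the TPU name is a
# hypothetical placeholder; on GPUs the same model code would typically use
# MirroredStrategy instead).
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu-name")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Any Keras model built here is replicated across the TPU cores.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="sgd", loss="mse")
```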