Results


NVIDIA Targets Next AI Frontiers: Inference And China

#artificialintelligence

NVIDIA's meteoric growth in the datacenter, where its business now generates some $1.6B annually, has been largely driven by the demand to train deep neural networks for Machine Learning (ML) and Artificial Intelligence (AI), an area where the computational requirements are simply mind-boggling. First, and perhaps most importantly, NVIDIA CEO Jensen Huang announced new TensorRT3 software that optimizes trained neural networks for inference processing on NVIDIA GPUs. Huang also announced that the largest Chinese Cloud Service Providers, Alibaba, Baidu, and Tencent, are all offering the company's newest Tesla V100 GPUs to their customers for scientific and deep learning applications. In addition to announcing those Chinese deployment wins, Huang provided some pretty compelling benchmarks to demonstrate the company's prowess in accelerating Machine Learning inference operations, both in the datacenter and at the edge.


Vincent AI Sketch Demo Draws In Throngs at GTC Europe - The Official NVIDIA Blog

@machinelearnbot

The story behind the story: a finely tuned generative adversarial network (GAN) that sampled 8,000 great works of art -- a tiny sample size in the data-intensive world of deep learning -- and, in just 14 hours of training on an NVIDIA DGX system, created an application that takes human input and turns it into something stunning. Building on thousands of hours of research undertaken by Cambridge Consultants' AI research lab, the Digital Greenhouse, a team of five built the Vincent demo in just two months. After Huang's keynote, GTC attendees had the opportunity to pick up the stylus for themselves, selecting from one of seven different styles to sketch everything from portraits to landscapes to, of course, cats. While traditional deep learning algorithms have achieved stunning results by ingesting vast quantities of data, GANs can be built from much smaller sample sizes by training one neural network to imitate the data it is fed and another to spot the fakes.
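That two-network setup can be sketched in a few lines of code. Below is a minimal, illustrative GAN training step written in Python with TensorFlow/Keras; the layer sizes, optimizer settings, and generic 28x28 image shape are assumptions for illustration and are not taken from the Vincent system.

    # Minimal GAN sketch: a generator learns to imitate the training data
    # while a discriminator learns to spot the fakes (illustrative only).
    import tensorflow as tf

    LATENT_DIM = 64            # size of the random noise vector (assumed)

    generator = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(LATENT_DIM,)),
        tf.keras.layers.Dense(28 * 28, activation="tanh"),
        tf.keras.layers.Reshape((28, 28, 1)),  # generic image shape (assumed)
    ])

    discriminator = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),              # single real-vs-fake logit
    ])

    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    g_opt = tf.keras.optimizers.Adam(1e-4)
    d_opt = tf.keras.optimizers.Adam(1e-4)

    @tf.function
    def train_step(real_images):
        noise = tf.random.normal([tf.shape(real_images)[0], LATENT_DIM])
        with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
            fakes = generator(noise, training=True)
            real_logits = discriminator(real_images, training=True)
            fake_logits = discriminator(fakes, training=True)
            # Discriminator: call real images real (1) and generated ones fake (0).
            d_loss = (bce(tf.ones_like(real_logits), real_logits)
                      + bce(tf.zeros_like(fake_logits), fake_logits))
            # Generator: fool the discriminator into calling its fakes real.
            g_loss = bce(tf.ones_like(fake_logits), fake_logits)
        d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
        g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
        d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
        g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
        return g_loss, d_loss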


Smartphone App Detects Concussions on the Field - NVIDIA Blog

#artificialintelligence

Just in time for the fall sports season, researchers are developing an AI-powered app that detects concussions right on the playing field. Working with a team of University of Washington (UW) researchers and clinicians, the project's lead researcher is using GPU-accelerated deep learning to create an app that detects concussions and other traumatic brain injuries with nothing more than a smartphone camera and a 3D-printed box. The app, called PupilScreen, assesses the pupil's response to light almost as well as a pupilometer, an expensive machine found only in clinical settings. In a pilot study of 42 patients with and without traumatic brain injury, the app tracked pupil size almost as well as the pupilometer.


Nvidia's Radical Move to Release AI Chip Design to Open Source

#artificialintelligence

Nvidia has open sourced the design of one of the AI chips it builds to power deep learning. The chip module whose design Nvidia has released, known as the Deep Learning Accelerator (DLA), is used for autonomous vehicles and associated technologies. By releasing its chip design to open source, Nvidia wants other AI chip makers to help bridge the gap between the many devices that could use deep learning and the ones Nvidia's own chips can reach, and with other chip manufacturers using its chip design technology, Nvidia plans to augment sales of its other hardware and software.


TensorFlow Tutorial, Part 2 – Getting Started

@machinelearnbot

The second part is a TensorFlow tutorial on getting started: installing TensorFlow and building a small use case. Different operating systems have different ways to install TensorFlow. In this case, we will generate synthetic house-size data and use house size to predict house price, along the lines of the sketch below. We will train our model on the training data and test it on the test data to see how accurate our predictions are.
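As a rough sketch of the kind of model the tutorial builds, the following Python/TensorFlow snippet generates synthetic house sizes and prices, splits them into training and test sets, and fits a one-parameter linear model. The data-generation constants and training settings are illustrative assumptions, not the tutorial's exact code.

    # Minimal sketch: predict house price from house size (illustrative only).
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(42)

    # Generate synthetic data: size in square feet, price roughly linear in size.
    size = rng.uniform(1000, 3500, 200).astype("float32")
    price = (100.0 * size + 25000.0 + rng.normal(0, 20000, 200)).astype("float32")

    # Split into training and test sets (70/30).
    split = int(0.7 * len(size))
    size_train, size_test = size[:split], size[split:]
    price_train, price_test = price[:split], price[split:]

    # Standardize both variables so this tiny model trains quickly and stably.
    s_mean, s_std = size_train.mean(), size_train.std()
    p_mean, p_std = price_train.mean(), price_train.std()
    x_train = ((size_train - s_mean) / s_std).reshape(-1, 1)
    x_test = ((size_test - s_mean) / s_std).reshape(-1, 1)
    y_train = ((price_train - p_mean) / p_std).reshape(-1, 1)
    y_test = ((price_test - p_mean) / p_std).reshape(-1, 1)

    # A single-unit Dense layer is plain linear regression: y = w * x + b.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), loss="mse")
    model.fit(x_train, y_train, epochs=100, verbose=0)

    # Evaluate on held-out data; predictions are mapped back to dollars.
    predictions = model.predict(x_test, verbose=0).ravel() * p_std + p_mean
    print("test MSE (standardized):", model.evaluate(x_test, y_test, verbose=0))
    print("first predicted price ($):", float(predictions[0]))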


Will machine learning save the enterprise server business?

#artificialintelligence

Neural networks apply enormous computational resources to the linear algebra at the heart of machine learning, multiplying very large matrices and iterating until the model makes statistically accurate decisions. Most of the machine learning models in operation today, such as those for natural language or image recognition, started in academia and were further developed by large, well-staffed research and engineering teams at Google, Facebook, IBM, and Microsoft. Enterprise machine learning experts and data scientists will have to start from scratch with research and iterate to build new high-accuracy models. It is a specialty business because an enterprise needs four characteristics not necessarily found together: a large corpus of data for training, highly skilled data scientists and machine learning experts, a strategic problem that machine learning can solve, and a reason not to use Google's or Amazon's pay-as-you-go offerings.
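To make the linear algebra point concrete, a single fully connected layer is essentially one large matrix multiplication followed by a nonlinearity, repeated layer after layer and pass after pass during training. A tiny illustrative sketch in Python, with arbitrary sizes:

    # One forward pass of a single fully connected layer: a matrix multiply,
    # a bias add, and a nonlinearity. Real models repeat this with matrices
    # holding millions of entries, billions of times, which is why GPUs matter.
    import numpy as np

    rng = np.random.default_rng(0)

    batch = rng.standard_normal((32, 512))     # 32 examples, 512 input features
    weights = rng.standard_normal((512, 256))  # layer weights: a 512 x 256 matrix
    bias = np.zeros(256)

    activations = np.maximum(batch @ weights + bias, 0.0)  # ReLU(x W + b)
    print(activations.shape)                   # (32, 256)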


To Compete With New Rivals, Chipmaker Nvidia Shares Its Secrets

#artificialintelligence

Nvidia built its business on graphics chips; researchers then found those chips were also good at powering deep learning, the software technique behind the recent enthusiasm for artificial intelligence. Longtime chip kingpin Intel and a stampede of startups are building and offering chips to power smart machines. This week the company released as open source the design of a chip module it made to power deep learning in cars, robots, and smaller connected devices such as cameras. In a tweet this week, one Intel engineer called Nvidia's open source tactic a "devastating blow" to startups working on deep learning chips.


With one eye on Amazon, Walmart plans to develop its own artificial intelligence networks

#artificialintelligence

The retailer is planning to build a neural network cluster based on Nvidia's AI chips over the rest of the year, according to Global Equities Research analyst Trip Chowdry, as reported by Barron's. The cluster will allow Walmart's OneOps team, which builds and maintains the company's internal application development system, to build a series of neural networks in order to train AI systems within current and future applications. Whole Foods gives Amazon Web Services' artificial intelligence team reams of data on shopper behavior to study and train its own AI systems, and AWS will be able to use Whole Foods stores to test drive AI-related services that could eventually become part of the core AWS product lineup. The problem is that there are only a handful of companies that can compete at the highest levels of artificial intelligence research, and Walmart isn't usually mentioned in the same breath as Amazon, Microsoft, Google, Baidu, Facebook, and others.


Scaling TensorFlow and Caffe to 256 GPUs - IBM Systems Blog: In the Making

@machinelearnbot

Since model training is an iterative task, in which a data scientist tweaks hyper-parameters, models, and even the input data and trains the AI models multiple times, long training runs delay time to insight and can limit productivity. The IBM Research team took on this challenge and, through innovative clustering methods, has built a "Distributed Deep Learning" (DDL) library that hooks into popular open source machine learning frameworks like TensorFlow, Caffe, Torch, and Chainer. Figure 1: Scaling results using Caffe to train a ResNet-50 model on the ImageNet-1K data set across 64 Power Systems servers containing a total of 256 NVIDIA P100 GPU accelerators. This release includes the distributed deep learning library and a technology preview for the vision capability that we announced in May.
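IBM's DDL library itself is not shown in the excerpt, but the pattern it accelerates -- synchronous data-parallel training, where each GPU processes a slice of every batch and the gradients are combined across devices -- can be sketched with TensorFlow's built-in MirroredStrategy. This is an illustrative stand-in for the general technique, not DDL's API, and the random dataset is a placeholder for ImageNet-1K.

    # Illustrative data-parallel training across all local GPUs using
    # TensorFlow's MirroredStrategy, a stand-in for libraries such as IBM DDL
    # that extend the same idea across many servers.
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()     # one replica per local GPU
    print("replicas in sync:", strategy.num_replicas_in_sync)

    # Placeholder dataset: random images and labels standing in for ImageNet-1K.
    images = tf.random.normal((256, 224, 224, 3))
    labels = tf.random.uniform((256,), maxval=1000, dtype=tf.int32)
    dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(
        64 * strategy.num_replicas_in_sync)         # global batch grows with GPUs

    with strategy.scope():
        # Placeholder model; the blog post trains ResNet-50 on ImageNet-1K.
        model = tf.keras.applications.ResNet50(weights=None, classes=1000)
        model.compile(optimizer="sgd",
                      loss=tf.keras.losses.SparseCategoricalCrossentropy())

    # Each replica processes a slice of every batch; gradients are averaged
    # across GPUs after each step.
    model.fit(dataset, epochs=1)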