Results


How AI Protects PayPal's Payments and Performance – The Official NVIDIA Blog

#artificialintelligence

With advances in machine learning and the deployment of neural networks, logistic regression-powered models are expanding their uses throughout PayPal. PayPal's deep learning system is able to filter out deceptive merchants and crack down on sales of illegal products. Kutsyy explained that the machines can identify "why transactions fail, monitoring businesses more efficiently," without the need to buy more hardware to solve the problem. The AI Podcast is available through iTunes, DoggCatcher, Google Play Music, Overcast, PlayerFM, Podbay, Pocket Casts, PodCruncher, Podkicker, Stitcher and SoundCloud.
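
The summary above contrasts PayPal's long-standing logistic regression-powered models with its newer deep learning system. As a rough illustration of the baseline approach only, the sketch below trains a scikit-learn logistic-regression fraud scorer on synthetic transaction features; the feature names, data, and thresholds are invented for illustration and do not reflect PayPal's actual models or data.

    # Minimal sketch of a logistic-regression fraud scorer on synthetic data.
    # Features and labels are invented for illustration; they are not PayPal's.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 10_000
    # Hypothetical transaction features: amount, account age (days), country-mismatch flag.
    X = np.column_stack([
        rng.lognormal(3.0, 1.0, n),   # transaction amount
        rng.integers(1, 3650, n),     # account age in days
        rng.integers(0, 2, n),        # shipping/billing country mismatch
    ])
    # Synthetic labels: fraud is more likely for large amounts on new, mismatched accounts.
    logit = 0.001 * X[:, 0] - 0.001 * X[:, 1] + 1.5 * X[:, 2] - 2.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))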


Moore's Law may be out of steam, but the power of artificial intelligence is accelerating

#artificialintelligence

A paper from Google's researchers says they simultaneously used as many as 800 of the powerful and expensive graphics processors that have been crucial to the recent uptick in the power of machine learning (see "10 Breakthrough Technologies 2013: Deep Learning"). Feeding data into deep learning software to train it for a particular task is much more resource intensive than running the system afterwards, but inference still takes significant oomph. Intel has slowed the pace at which it introduces generations of new chips with smaller, denser transistors (see "Moore's Law Is Dead. Now What?"). That slowdown also motivates the startups – and giants such as Google – creating new chips customized to power machine learning (see "Google Reveals a Powerful New AI Chip and Supercomputer").
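
The scale described here – hundreds of GPUs working on a single training job – is typically reached through data parallelism, where each device processes a slice of every batch. The following is a minimal single-node sketch of that pattern, assuming PyTorch is installed; it illustrates the general idea, not the specific setup used in the Google paper.

    # Sketch: spread one training step across whatever CUDA GPUs are visible
    # (data parallelism). Falls back to CPU if no GPU is available.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)  # replicate the model across all visible GPUs
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # One synthetic training step; real workloads repeat this over huge datasets,
    # which is why training costs far more than running the trained system.
    x = torch.randn(1024, 512, device=device)
    y = torch.randint(0, 10, (1024,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()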


Deploying Deep Neural Networks with NVIDIA TensorRT

#artificialintelligence

NVIDIA TensorRT is a high-performance deep learning inference library for production environments. Power efficiency and speed of response are two key metrics for deployed deep learning applications, because they directly affect the user experience and the cost of the service provided. TensorRT automatically optimizes trained neural networks for run-time performance, delivering up to 16x higher energy efficiency (performance per watt) on a Tesla P100 GPU compared to common CPU-only deep learning inference systems (see Figure 1). Figure 2 shows the performance of NVIDIA Tesla P100 and K80 running inference using TensorRT with the relatively complex GoogLeNet neural network architecture. In this post we will show you how you can use TensorRT to get the best efficiency and performance out of your trained deep neural network on a GPU-based deployment platform.
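
As a rough illustration of the workflow described above, the sketch below builds an optimized TensorRT inference engine from a trained network exported to ONNX. It assumes TensorRT's Python bindings (tensorrt, roughly 8.x) are installed and that a local file model.onnx exists; exact API names vary across TensorRT versions, so treat this as a sketch rather than a definitive recipe.

    # Sketch: build an FP16-optimized TensorRT engine from an ONNX export of a
    # trained model. Assumes TensorRT 8.x Python bindings and a local "model.onnx".
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("failed to parse ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # allow reduced precision for higher throughput per watt

    engine_bytes = builder.build_serialized_network(network, config)
    with open("model.engine", "wb") as f:
        f.write(engine_bytes)  # deploy this serialized engine with the TensorRT runtime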


China Pushes Breadth-First Search Across Ten Million Cores

#artificialintelligence

There is increasing interplay between the worlds of machine learning and high performance computing (HPC). This began with a shared hardware and software story, since many supercomputing tricks of the trade play well into deep learning, but as we look to next-generation machines, the bond keeps tightening. Many supercomputing sites are figuring out how to work deep learning into their existing workflows, either as a pre- or post-processing step, while some research areas might eventually do away with traditional supercomputing simulations altogether. While these massive machines were designed with simulations in mind, the strongest supers have architectures that parallel the unique requirements of training and inference workloads. One such system in the U.S. is the future Summit supercomputer coming to Oak Ridge National Lab later this year, but many of the other architectures that are especially well suited to machine learning are in China and Japan – and feature non-standard processing elements.
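
The benchmark at the heart of this story is breadth-first search over enormous graphs (the Graph500 workload). Below is a plain, single-machine, level-synchronous BFS sketch in Python; distributed Graph500-style implementations partition the frontier across millions of cores, but they follow the same level-by-level structure.

    from collections import deque

    def bfs_levels(adj, source):
        """Level-synchronous breadth-first search on an adjacency-list graph.

        Returns a dict mapping each reachable vertex to its distance (level)
        from `source`. Distributed BFS partitions the frontier across nodes,
        but the level-by-level expansion is the same.
        """
        level = {source: 0}
        frontier = deque([source])
        while frontier:
            next_frontier = deque()
            for u in frontier:
                for v in adj.get(u, ()):
                    if v not in level:
                        level[v] = level[u] + 1
                        next_frontier.append(v)
            frontier = next_frontier
        return level

    # Tiny example graph (undirected edges listed both ways).
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}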


Compare NVIDIA Pascal GPUs and Google TPU

#artificialintelligence

The recent TPU paper by Google draws a clear conclusion – without accelerated computing, the scale-out of AI is simply not practical. Today's economy runs in the world's data centers, and data centers are changing dramatically. Not so long ago, they served up web pages, advertising and video content. Now, they recognize voices, detect images in video streams and connect us with information we need exactly when we need it. Increasingly, those capabilities are enabled by a form of artificial intelligence called deep learning.


Machines that learn to do, and do to learn: What is artificial intelligence? – Bruegel

#artificialintelligence

Artificial intelligence (AI) refers to intelligence exhibited by machines. It lies at the intersection of big data, machine learning and computer programming. Computer programming contributes the necessary design and operational framework. It can make machines capable of carrying out a complex series of computations automatically. These computations can be linked to specific actions in the case of robots (which are in principle programmable through computers).


Which GPU(s) to Get for Deep Learning

#artificialintelligence

Deep learning is a field with intense computational requirements, and your choice of GPU will fundamentally determine your deep learning experience. Without a GPU, this might mean months of waiting for an experiment to finish, or running an experiment for a day or more only to find that the chosen parameters were off. With a good, solid GPU, one can quickly iterate over deep learning networks, and run experiments in days instead of months, hours instead of days, minutes instead of hours. Making the right choice when it comes to buying a GPU is therefore critical. So how do you select the GPU that is right for you?
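
To make the "months versus minutes" point concrete, the sketch below times the same large matrix multiplication on the CPU and, if one is present, on a CUDA GPU. It assumes PyTorch is installed; the matrix size and repeat count are arbitrary, and the speedup you observe depends entirely on the GPU you choose, which is the article's point.

    # Sketch: compare one large matrix multiplication on CPU vs GPU (assumes PyTorch).
    import time
    import torch

    def time_matmul(device, n=4096, repeats=10):
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        if device == "cuda":
            torch.cuda.synchronize()  # finish setup before timing
        start = time.perf_counter()
        for _ in range(repeats):
            _ = a @ b
        if device == "cuda":
            torch.cuda.synchronize()  # wait for queued GPU kernels to complete
        return (time.perf_counter() - start) / repeats

    print(f"CPU: {time_matmul('cpu'):.3f} s per matmul")
    if torch.cuda.is_available():
        print(f"GPU: {time_matmul('cuda'):.3f} s per matmul")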


Google Reveals Technical Specs and Business Rationale for TPU Processor – PPP Focus

#artificialintelligence

By way of example, the Google engineers said that if people used voice search for three minutes a day, running the associated speech recognition tasks without the TPU would have required the company to have twice as many data centers. Based on the scant details Google provides about its data center operations – which include 15 major sites – the search-and-ad giant was looking at additional capital expenditures of perhaps $15bn, assuming that a large Google data center costs about $1bn. As it applied machine learning capabilities to more of its products and applications over the past several years, Google said it realized it needed to supercharge its hardware as well as its software. Google had used both Kubernetes and TensorFlow extensively on its own (albeit in somewhat different forms) for years before they became publicly available. Due to that inherent efficiency, the chips can squeeze more operations per second out of the silicon, allowing more sophisticated and powerful machine learning models to deliver results more rapidly.
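
The capital-expenditure figure quoted above is a back-of-envelope estimate: doubling roughly 15 major data center sites at about $1bn each gives the ~$15bn number. The sketch below simply restates that arithmetic; both inputs are the article's rough assumptions, not figures disclosed by Google.

    # Back-of-envelope restatement of the article's estimate; the inputs are the
    # article's rough assumptions, not disclosed figures.
    major_sites = 15              # Google's major data center sites (per the article)
    cost_per_site_usd = 1e9       # assumed cost of a large data center
    # Doubling capacity without TPUs ~= building the same number of sites again.
    additional_capex = major_sites * cost_per_site_usd
    print(f"additional capex ~ ${additional_capex / 1e9:.0f}bn")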


Competition in AI platform market to heat up in 2017

#artificialintelligence

Intel's Nervana platform is a $400 million investment in AI. Back in November, Intel announced what it claims is a comprehensive AI platform for data center and compute applications, called Nervana, with its focus aimed directly at taking on Nvidia's GPU solutions for enterprise users. The platform is the result of the chipmaker's August acquisition, for $400 million, of Nervana Systems, a 48-person startup led by former Qualcomm researcher Naveen Rao. Intel claims Nervana, built using FPGA technology and designed for highly optimized AI solutions, will deliver up to a 100-fold reduction in the time it takes to train a deep learning model within the next three years. The company intends to integrate Nervana technology into its Xeon and Xeon Phi processor lineups. During Q1, it will test the Nervana Engine chip, codenamed 'Lake Crest,' and make it available to key customers later in the year.