Results


Nvidia aims for level 5 vehicle autonomy with Pegasus

ZDNet

By the middle of 2018, Nvidia believes it will have a system capable of level 5 autonomy in the hands of the auto industry, which will allow for fully self-driving vehicles. Pegasus is rated as being capable of 320 trillion operations per second, which the company claims is a thirteen-fold increase over previous generations. In May, Nvidia took the wraps off its Tesla V100 accelerator aimed at deep learning. The company said the V100 has 1.5 times the general-purpose FLOPS compared to Pascal, a 12 times improvement for deep learning training, and six times the performance for deep learning inference.


Google's dedicated TensorFlow processor, or TPU, crushes Intel, Nvidia in inference workloads - ExtremeTech

@machinelearnbot

First, Turbo mode and GPU Boost were disabled for the Haswell CPUs and the Nvidia GPUs, respectively, not to artificially tilt the score in favor of the TPU, but because Google's data centers prioritize dense hardware packing over raw performance. As for Nvidia's K80, the test server in question deployed four K80 cards with two GPUs per card, for a total of eight GPUs. Packed that tightly, the only way to take advantage of the GPUs' boost clock without overheating would have been to remove two of the K80 cards. Since the clock-frequency increase isn't nearly as potent as doubling the total number of GPUs in the server, Google leaves boost disabled on these server configurations.


Top 5 Deep Learning and AI Stories - October 6, 2017

#artificialintelligence

"This is the first time Oracle has offered access to GPU acceleration, reflecting an industry-wide move to provide access to cloud hardware optimized for artificial intelligence and machine learning. On Tuesday, an international team of chemists -- Jacques Dubochet, Joachim Frank and Richard Henderson -- won the prize for their work with cryogenic electron microscopy, which allows scientists to see the detailed protein structures that drive the inner workings of cells. "This is the first time Oracle has offered access to GPU acceleration, reflecting an industry-wide move to provide access to cloud hardware optimized for artificial intelligence and machine learning. On Tuesday, an international team of chemists -- Jacques Dubochet, Joachim Frank and Richard Henderson -- won the prize for their work with cryogenic electron microscopy, which allows scientists to see the detailed protein structures that drive the inner workings of cells.


Despite the hype, nobody is beating Nvidia in AI

#artificialintelligence

Investors say this isn't even the top for Nvidia: William Stein at SunTrust Robinson Humphrey predicts Nvidia's revenue from selling server-grade GPUs to internet companies, which doubled last year, will continue to increase 61% annually until 2020. The most well-known of these next-generation chips is Google's Tensor Processing Unit (TPU), which the company claims is 15-30 times faster than contemporary central processing units (CPUs) and GPUs. Even disregarding the market advantage of capturing a strong initial customer base, Wang notes that Nvidia is also continuing to improve the efficiency of its GPU architecture fast enough to stay competitive with new challengers. Nvidia currently supports every major machine-learning framework; Intel supports four, AMD supports two, Qualcomm supports two, and Google supports only its own.


Installing Nvidia, Cuda, CuDNN, TensorFlow and Keras

@machinelearnbot

In this post I will outline how to install the drivers and packages needed to get up and running with the TensorFlow deep learning framework. To start, install Ubuntu 14.04 Server. Download the CUDA 7.5 run file with wget, then install the driver, the toolkit, and the samples. cuDNN is a library that accelerates deep learning frameworks such as TensorFlow and Theano.
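Once the driver, CUDA, and cuDNN are installed, a quick way to confirm the stack works is to check that TensorFlow can actually see the GPU. The snippet below is a minimal sketch using the current TensorFlow 2.x API (the post itself targets an older TensorFlow/CUDA 7.5 stack, where the equivalent check would go through tf.test or device_lib):

```python
# Minimal sanity check that TensorFlow was built with GPU support
# and can see the installed device (TensorFlow 2.x API).
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

# Run a tiny matrix multiply on the GPU if one is available.
device = "/GPU:0" if gpus else "/CPU:0"
with tf.device(device):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    print("matmul on", device, ":", tf.matmul(a, b).numpy())
```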


Tensorflow Tutorial : Part 2 – Getting Started

@machinelearnbot

The second part of this TensorFlow tutorial covers getting started: installing TensorFlow and building a small use case. Different operating systems have different ways to install TensorFlow. In this case, we will generate synthetic house-size data and use it to predict house prices. We will train our model on the training data and test it on the test data to see how accurate our predictions are.
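As a concrete illustration of that use case, here is a minimal sketch of a linear regression on synthetic house-size data using the Keras API; the data, variable names, and hyperparameters are illustrative assumptions, not the tutorial's own code:

```python
# Minimal linear-regression sketch for the house-price example.
# Synthetic data and hyperparameters are illustrative, not from the tutorial.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)

# Generate synthetic house sizes (sq ft) and roughly linear prices with noise.
size = rng.uniform(1000, 3500, size=200).astype("float32")
price = (100.0 * size + 20000.0 + rng.normal(0, 15000, size=200)).astype("float32")

# Normalize so plain SGD converges quickly, then split into train/test sets.
x = ((size - size.mean()) / size.std()).reshape(-1, 1)
y = ((price - price.mean()) / price.std()).reshape(-1, 1)
x_train, x_test, y_train, y_test = x[:160], x[160:], y[:160], y[160:]

# A single Dense unit is y = w * x + b, i.e. plain linear regression.
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.fit(x_train, y_train, epochs=50, verbose=0)

print("test MSE (normalized):", model.evaluate(x_test, y_test, verbose=0))
```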


Where Major Chip Companies Are Investing In AI, AR/VR, And IoT

#artificialintelligence

We dug into the private market bets made by major computer chip companies, including GPU makers. Our analysis encompasses the venture arms of NVIDIA, Intel, Samsung, AMD, and more. Meanwhile, the vast application of graphics hardware in AI has propelled GPU (graphics processing unit) maker NVIDIA into tech juggernaut status: the company's shares were the best-performing stock over the past year. Also included in the analysis are 7 chip companies we identified as active in private markets, including NVIDIA, AMD, and ARM.


Intel Proposes Its Embedded Processor Graphics For Real-Time Artificial Intelligence

#artificialintelligence

Further research told me that along with FPGAs (field-programmable gate arrays), there's an embedded Intel Processor Graphics option for deep learning inference. Unlike Microsoft's Project Brainwave (which relies solely on Altera's Stratix 10 FPGA to accelerate deep learning inference), Intel's Inference Engine design uses integrated GPUs alongside FPGAs. However, Intel's embedded Processor Graphics and Altera's Stratix 10 FPGA could be among the top hardware products for accelerating deep learning inference. Marketing its embedded graphics processors to accelerate deep learning/artificial intelligence computing is one more reason for us to stay long INTC.


Which GPU(s) to Get for Deep Learning

@machinelearnbot

With a good, solid GPU, one can quickly iterate over deep learning networks and run experiments in days instead of months, hours instead of days, minutes instead of hours. Later I ventured further down the road and developed a new 8-bit compression technique which lets you parallelize dense or fully connected layers much more efficiently with model parallelism than 32-bit methods do. For example, if you have differently sized fully connected layers or dropout layers, the Xeon Phi is slower than the CPU. GPUs excel at problems that involve large amounts of memory thanks to their high memory bandwidth.
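The 8-bit compression idea can be illustrated generically: quantize 32-bit activations or gradients down to int8 before shipping them between GPUs, then rescale them on the receiving side, cutting transfer volume by 4x. The NumPy sketch below shows simple per-tensor linear quantization; it is a generic illustration, not the author's actual technique:

```python
# Generic illustration of 8-bit compression for model-parallel transfers:
# quantize float32 tensors to int8 before sending, dequantize after.
# This is a simple linear-scaling sketch, not the author's specific method.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor onto int8 using one per-tensor scale factor."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

acts = np.random.randn(1024, 4096).astype(np.float32)  # stand-in activations
q, scale = quantize_int8(acts)

print("bytes to transfer (fp32):", acts.nbytes)   # 4 bytes per value
print("bytes to transfer (int8):", q.nbytes)      # 1 byte per value
print("max abs reconstruction error:", np.abs(dequantize_int8(q, scale) - acts).max())
```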


The AI Revolution Is Eating Software: NVIDIA Is Powering It NVIDIA Blog

#artificialintelligence

It's great to see the two leading teams in AI computing race while we collaborate deeply across the board – tuning TensorFlow performance, and accelerating the Google cloud with NVIDIA CUDA GPUs. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics. Such leaps in performance have drawn innovators from every industry, with the number of startups building GPU-driven AI services growing more than 4x over the past year to 1,300. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.