gpu


Build a Deep Learning Rig for $800 – Towards Data Science

#artificialintelligence

I was introduced to deep learning as part of Udacity's Self-Driving Car Nanodegree (SDCND) program, which I started in November. Some of our projects required building deep neural networks for tasks such as classifying traffic signs, and using behavior cloning to train a car to drive autonomously in a simulator. However, my MacBook Pro was not up to the task of training neural networks. I used AWS for my first deep learning project, and while it's a viable option, I decided to build my own machine for greater flexibility and convenience. I also plan to do a lot of deep learning outside of the nanodegree, such as Kaggle competitions and side projects, so it should end up being the more cost-effective option as well.


NVIDIAVoice: Deep Learning Implementers Speak! Best Practices for Better Productivity, Performance and Scale

Forbes Technology

Consumer GPUs are good enough, right? Do I need to go on-premises for training? Will my developers really notice the difference between one GPU platform and another? All great questions, and everyone mulls them over at some point. Like anything, it's often helpful to consult with a peer who's done it before.


AI caramba! Nvidia devs get a host of new kit to build smart systems

#artificialintelligence

Nvidia has released a bunch of new tools for savvy AI developers in time for the Computer Vision and Pattern Recognition conference in Salt Lake City on Tuesday. Some of them were previously announced at its GPU Technology Conference (GTC) earlier this year. The beta platform for using graphics cards with the Kubernetes system is now available for developers to test out. It's aimed at enterprises dealing with heavy AI workloads that need to be shared across multiple GPU cloud clusters. Large datasets and models take a long time to train, so using Kubernetes to spread the work across GPUs is meant to speed up training and inference.


Management AI: GPU and FPGA, Why They Are Important for Artificial Intelligence

#artificialintelligence

In business software, the computer chip has been forgotten. Robotics has been more tightly tied to individual hardware devices, so manufacturing applications are still a bit more focused on hardware. The current state of Artificial Intelligence (AI) in general, and Deep Learning (DL) in particular, is tying hardware to software more tightly than at any time in computing since the 1970s. While my last few "Management AI" articles were about overfitting and bias, two key risks in a machine learning (ML) system, this column digs deeper to address the question many managers, especially business line managers, might have about the hardware acronyms constantly mentioned in the ML ecosystem: Graphics Processing Unit (GPU) and Field Programmable Gate Array (FPGA).
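For a concrete sense of why the GPU shows up in every ML conversation: deep learning workloads boil down to enormous batches of independent multiply-accumulate operations, which a GPU spreads across thousands of threads. The CUDA sketch below is a generic illustration of that pattern (a SAXPY over a million elements), not something taken from the article.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread updates one element: y[i] = alpha * x[i] + y[i].
// This multiply-accumulate pattern is the building block of the matrix
// multiplies and convolutions that dominate deep learning, and it is what
// a GPU spreads across thousands of threads at once.
__global__ void saxpy(int n, float alpha, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = alpha * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;   // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);   // ~4096 blocks of 256 threads
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);   // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

An FPGA reaches the same arithmetic by a different route, wiring the operation into reconfigurable logic, which is why the two acronyms tend to appear side by side in ML hardware discussions.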


Understanding Mainstream Chips Used in Artificial Intelligence - DZone AI

#artificialintelligence

In January 2018, the International Consumer Electronics Show (CES) kicked off in Las Vegas, Nevada, featuring more than 4,000 exhibitors. CES is the world's largest consumer electronics show and the "Super Bowl" of global consumer electronics and consumer technology. Industry giants such as Qualcomm, NVIDIA, Intel, LG, IBM, and Baidu took this opportunity to publicly reveal their latest and greatest AI chips, products, and strategies. AI-related technologies and products were among the hot topics at this year's show, with embedded AI products receiving the most widespread attention. The dominant approach in AI development today is deep learning, whose learning process is divided into two parts: training and inference.


AMD shows Vega 7nm 32GB HBM2

#artificialintelligence

Vega 7nm, or Vega 20 as AMD used to call this GPU on its roadmap a while ago, naturally increases performance in deep learning and artificial intelligence, but at the same time adding 32GB of HBM2 made it too expensive to launch as a gaming part. This is a "we told you so" moment. The original Vega 64 and 56 only made a profit because the market went crazy for any hardware suitable for cryptocurrency mining. Since that wave of interest seems to be over, AMD could not bet on making, let's say, a 16GB HBM2 Vega 7nm gaming part, as the BOM would still be too expensive. David Wang, SVP of Engineering at AMD and an ex-Synaptics chap who took over after Raja left, went into a few more details about Vega 7nm.


AMD Unleashes 32-Core Processor While Betting on Machine Learning in GPU - GuruFocus.com

#artificialintelligence

Advanced Micro Devices (NASDAQ:AMD) is rallying as the company unveiled the world's first 7nm graphics processing unit (GPU) alongside the new 32-core Ryzen Threadripper processor at Computex Taipei 2018 through a live stream yesterday. The next-generation Vega GPU products will be based on GlobalFoundries' 7nm technology and are expected to launch during the second half of 2018. The new 32-core Threadripper CPU, however, will be based on a 12nm process technology and is set to debut during the third quarter of 2018. Furthermore, AMD continues to make strides on the server side as it will start sampling the 7nm EPYC – processors targeting data centers and servers – during the second half of 2018. "At Computex 2018 we demonstrated how the strongest CPU and GPU product portfolio in the industry gets even stronger in the coming months," said AMD President and CEO Dr. Lisa Su in a press release.


AMD Plugs Machine Learning Into Upcoming Vega 7nm GPU

Forbes Technology

Putting together bits of information dropped during AMD's PC-heavy hour-and-a-half presentation, it becomes apparent that Vega 7nm is finally aimed at high performance deep learning (DL) and machine learning (ML) applications – artificial intelligence (AI), in other words. AMD's EPYC successes may be paving the way for Vega 7nm in cloud AI training and inference applications. AMD claims that the 7nm process node it has co-developed with its fab partners will yield twice the transistor density, twice the power efficiency and about a third more performance than its 14nm process node. An educated guess says that not all Vega 7nm products will sport this high-end memory configuration – I think that showing off 32GB was a pointed message to AMD's cloud customers. AMD's Infinity Fabric interface will enable high bandwidth, coherent memory communications between Vega 7nm chips and AMD Zen processor chips, such as AMD's Zen2 7nm server chips.


GPU Exploration of Two-Player Games with Perfect Hash Functions

AAAI Conferences

In this paper we improve solving two-player games by computing the game-theoretical value of every reachable state. A graphics processing unit located on the graphics card is used as a co-processor to accelerate the solution process. We exploit perfect hash functions to store the game states efficiently in memory and to transfer their ordinal representation between the host and the graphics card. As an application we validate Gasser's results that Nine-Men-Morris is a draw on a personal computer. Moreover, our solution is strong, while for the opening phase Gasser only provided a weak solution.
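The core trick in the abstract, a perfect hash that maps each game state to a unique ordinal so that values can be stored in a flat array and only the index has to cross the host/GPU boundary, can be illustrated with combinatorial ranking. The CUDA sketch below is a generic illustration under that reading, not the paper's code: it ranks placements of k identical pieces on an n-square board into the range [0, C(n, k)).

```cuda
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

// Binomial coefficient C(n, k); values here are small enough for 64-bit arithmetic.
__host__ __device__ uint64_t binom(int n, int k) {
    if (k < 0 || k > n) return 0;
    uint64_t r = 1;
    for (int i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

// Perfect hash via the combinatorial number system: a board with exactly k pieces
// on n squares (given as a bitmask) maps to a unique index in [0, C(n, k)).
// The game-theoretical value of each state can then live in a dense array,
// and only this ordinal has to travel between host and graphics card.
__host__ __device__ uint64_t rank_placement(uint64_t occupied, int n) {
    uint64_t index = 0;
    int seen = 0;                          // pieces encountered so far
    for (int sq = 0; sq < n; ++sq) {
        if (occupied & (1ULL << sq)) {
            ++seen;
            index += binom(sq, seen);      // placements that precede this one
        }
    }
    return index;
}

// Rank a batch of boards on the GPU, one thread per board.
__global__ void rank_kernel(const uint64_t* boards, uint64_t* ranks, int count, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) ranks[i] = rank_placement(boards[i], n);
}

int main() {
    // Example: placements of 3 identical pieces on a 24-square mill board.
    const int n = 24, count = 2;
    uint64_t *boards, *ranks;
    cudaMallocManaged(&boards, count * sizeof(uint64_t));
    cudaMallocManaged(&ranks,  count * sizeof(uint64_t));
    boards[0] = (1ULL << 0) | (1ULL << 1) | (1ULL << 2);   // lowest placement, rank 0
    boards[1] = (1ULL << 0) | (1ULL << 5) | (1ULL << 23);

    rank_kernel<<<1, 32>>>(boards, ranks, count, n);
    cudaDeviceSynchronize();

    printf("ranks: %llu and %llu out of %llu states\n",
           (unsigned long long)ranks[0], (unsigned long long)ranks[1],
           (unsigned long long)binom(n, 3));
    cudaFree(boards);
    cudaFree(ranks);
    return 0;
}
```

A full solver would compose several such ranks (one per piece type and game phase) and use the result to index a bit-vector of game-theoretical values.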


Machine Learning with C++ - Polynomial Regression on GPU

@machinelearnbot

Hello, this is my second article about how to use modern C++ for solving machine learning problems. This time I will show how to make a model for the polynomial regression problem described in the previous article, but now with another library which allows you to use your GPU easily. For this tutorial I chose the MShadow library; you can find its documentation here. This library was chosen because it is actively developed and is used as the basis for MXNet, one of the widely used deep learning frameworks. Also, it is a header-only library with minimal dependencies, so its integration is not hard at all.
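For readers who want the shape of the approach before digging into MShadow, here is a library-agnostic sketch of the same idea in raw CUDA: fit y = w0 + w1*x + w2*x^2 by gradient descent, with the per-sample gradient evaluated on the GPU. Everything below (the kernel, synthetic data, and hyperparameters) is my own illustration, not code from the article.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define DEGREE 2        // fit y ~ w0 + w1*x + w2*x^2
#define N_SAMPLES 1024

// One thread per training sample: evaluate the polynomial, form the residual,
// and accumulate this sample's share of the mean-squared-error gradient.
__global__ void grad_kernel(const float* x, const float* y, const float* w,
                            float* grad, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float powers[DEGREE + 1];
    float pred = 0.0f, xp = 1.0f;
    for (int d = 0; d <= DEGREE; ++d) { powers[d] = xp; pred += w[d] * xp; xp *= x[i]; }
    float err = pred - y[i];
    for (int d = 0; d <= DEGREE; ++d)
        atomicAdd(&grad[d], 2.0f * err * powers[d] / n);   // d(MSE)/d(w_d)
}

int main() {
    float *x, *y, *w, *grad;
    cudaMallocManaged(&x, N_SAMPLES * sizeof(float));
    cudaMallocManaged(&y, N_SAMPLES * sizeof(float));
    cudaMallocManaged(&w, (DEGREE + 1) * sizeof(float));
    cudaMallocManaged(&grad, (DEGREE + 1) * sizeof(float));

    // Synthetic data from a known polynomial: y = 1 + 2x + 3x^2 on [-1, 1].
    for (int i = 0; i < N_SAMPLES; ++i) {
        x[i] = -1.0f + 2.0f * i / (N_SAMPLES - 1);
        y[i] = 1.0f + 2.0f * x[i] + 3.0f * x[i] * x[i];
    }
    for (int d = 0; d <= DEGREE; ++d) w[d] = 0.0f;

    const float lr = 0.1f;
    for (int step = 0; step < 5000; ++step) {
        for (int d = 0; d <= DEGREE; ++d) grad[d] = 0.0f;
        grad_kernel<<<(N_SAMPLES + 255) / 256, 256>>>(x, y, w, grad, N_SAMPLES);
        cudaDeviceSynchronize();
        for (int d = 0; d <= DEGREE; ++d) w[d] -= lr * grad[d];  // gradient-descent step
    }
    printf("fitted coefficients: %.3f %.3f %.3f (target 1, 2, 3)\n", w[0], w[1], w[2]);

    cudaFree(x); cudaFree(y); cudaFree(w); cudaFree(grad);
    return 0;
}
```

In MShadow the same prediction and gradient would instead be written as tensor expressions, which the library evaluates on the GPU for you.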