GPU


Lessons Learned Reproducing a Deep Reinforcement Learning Paper

#artificialintelligence

There are a lot of neat things going on in deep reinforcement learning. One of the coolest things from last year was OpenAI and DeepMind's work on training an agent using feedback from a human rather than a classical reward signal. There's a great blog post about it at Learning from Human Preferences, and the original paper is at Deep Reinforcement Learning from Human Preferences. I've seen a few recommendations that reproducing papers is a good way of levelling up machine learning skills, and I decided this could be an interesting one to try. It was indeed a super fun project, and I'm happy to have tackled it - but looking back, I realise it wasn't exactly the experience I thought it would be. If you're thinking about reproducing papers too, here are some notes on what surprised me about working with deep RL.


Tensorflow Image: Augmentation on GPU – Towards Data Science

#artificialintelligence

Here we are going to see different types of augmentations that can be applied to images. One of the most basic augmentations is flipping, which can double your data (depending on how you apply it). Random flipping: with a 1-in-2 chance, your image will be flipped horizontally or vertically. Alternatively, you can use tf.reverse for the same effect. An image can also be rotated k times by 90 degrees in the counter-clockwise direction.
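
As a minimal sketch of these augmentations (assuming TensorFlow 2.x; tf.reverse comes from the snippet, while the other tf.image calls are my choice of the standard ops for these operations):

```python
import tensorflow as tf

def augment(image):
    # Flip horizontally and vertically, each with a 1-in-2 chance.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    # tf.reverse gives the same flip deterministically, e.g. along width:
    # image = tf.reverse(image, axis=[1])
    # Rotate k * 90 degrees counter-clockwise, with k drawn from {0, 1, 2, 3}.
    k = tf.random.uniform([], minval=0, maxval=4, dtype=tf.int32)
    return tf.image.rot90(image, k=k)
```

Mapping augment over a tf.data pipeline applies a fresh random flip and rotation to every image each epoch, which is where the "double the data" effect comes from.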


Intel offloads virus scanning to the GPU for better battery life and performance

PCWorld

Intel actually has nearly a dozen different technologies that it has developed to secure PCs--many of which fly beneath the radar, even those it has marketed to consumers, like True Key. Intel has sought to lock down the PC from the BIOS to the OS to the apps and data. Intel's final announcement was what it called Intel Security Essentials, a way to standardize the security features built into Atom, Core, and Xeon processors so that developers can build applications that take advantage of them in a consistent way.


TensorFlow in your Browser

@machinelearnbot

If you want to explore machine learning, you can now write applications that train and deploy TensorFlow models in your browser using JavaScript. We know what you are thinking: that has to be slow. Surprisingly, it isn't, since the libraries use Graphics Processing Unit (GPU) acceleration. Of course, that assumes your browser can use your GPU.


CPU is from Mars, GPU is from Venus – Lanner

#artificialintelligence

GPU use is expanding fast beyond the 3D video game realm, offering numerous benefits for enterprise as well as industrial applications. With deep learning taking center stage in the Industry 4.0 revolution, GPU and x86 CPU manufacturers are making sure solution developers are not limited in their range of options when choosing the right silicon for their product. So let's review what a GPU can do differently from a CPU and vice versa, and how they make the perfect couple in the world of robot surgeons, cryptocurrencies, smart factories, and self-driving cars, starting with their basic differentiating characteristics. The central processing unit (CPU) of a computer is often referred to as its brain, where all the processing and multitasking takes place.
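
To make the contrast concrete, here is a minimal sketch (assuming TensorFlow is installed and can see a GPU; the matrix size is an arbitrary illustration) that times the same data-parallel matrix multiply on each device:

```python
import time
import tensorflow as tf

def time_matmul(device, n=4000):
    # A large matrix multiply is the kind of data-parallel work GPUs excel at.
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        start = time.time()
        c = tf.matmul(a, b)
        _ = c.numpy()  # force the (possibly asynchronous) op to finish
    return time.time() - start

print("CPU:", time_matmul("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU:", time_matmul("/GPU:0"))
```

The exact numbers depend on your hardware; the point is the throughput gap that comes from spreading one uniform computation across thousands of GPU cores.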


The Argument for Accelerated and Integrated Analytics – The Official NVIDIA Blog

#artificialintelligence

The rise of modern business intelligence (BI) has seen the emergence of a number of component parts designed to support the different analytical functions necessary to deliver what enterprises require. Perhaps the most fundamental component of the BI movement is the traditional frontend or visualization application. Companies like Tableau, Qlik, Birst, Domo and Periscope provide these. There are dozens more -- all with essentially equivalent capabilities: the ability to make spreadsheets look beautiful. Some of these companies have been tremendously successful, primarily differentiating themselves on the axis of usability.


Nvidia accelerates artificial intelligence, analytics with an ecosystem approach

ZDNet

This proclamation, from NVIDIA co-founder, president, and CEO Jensen Huang at the GPU Technology Conference (GTC), held from March 26 to March 29 in San Jose, Calif., only hints at the company's growing impact on state-of-the-art computing. Nvidia's physical products are accelerators (for third-party hardware) and the company's own GPU-powered workstations and servers. On the hardware front, the headlines from GTC built on the foundation of Nvidia's graphics processing unit advances. If the "feeds and speeds" stats mean nothing to you, let's put them into the context of real workloads.


GTC 2018 Keynote with NVIDIA CEO Jensen Huang

#artificialintelligence

Watch a replay of NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2018 in Silicon Valley, where he unveiled a series of advances to NVIDIA's deep learning computing platform that deliver a 10x performance boost on deep learning workloads; launched the Quadro GV100 GPU, transforming workstations with 118.5 TFLOPS of deep learning performance; and introduced NVIDIA DRIVE Constellation to run self-driving car systems for billions of simulated miles - and much more.


Picking a GPU for Deep Learning – Slav

@machinelearnbot

Deep Learning (DL) is part of the field of Machine Learning (ML). DL works by approximating a solution to a problem using neural networks. One of the nice properties of neural networks is that they find patterns in the data (features) by themselves. This is opposed to having to tell your algorithm what to look for, as in the olde times. However, this often means the model starts from a blank slate (unless we are transfer learning).
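
As a minimal sketch of that "finds the features by itself" property (the layer sizes and 28x28 input shape are illustrative assumptions, not anything from the article):

```python
import tensorflow as tf

# Raw pixels in, prediction out: the hidden layer learns its own feature
# detectors instead of us hand-engineering what to look for.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # raw 28x28 image
    tf.keras.layers.Dense(128, activation="relu"),    # learned features
    tf.keras.layers.Dense(10, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Training a model like this is exactly the workload where the GPU choice discussed in the article starts to matter.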


Adobe and Nvidia expand partnership for Sensei AI – ZDNet

#artificialintelligence

Adobe and Nvidia have announced a partnership that will see both companies deliver new artificial intelligence (AI) and deep learning services for Adobe Creative Cloud. Making the announcement during the Adobe Summit keynote in Las Vegas on Wednesday, Adobe CEO and president Shantanu Narayen was joined by Nvidia founder and CEO Jensen Huang. The CEOs said the partnership will see both companies work to optimise the Adobe Sensei AI and machine learning framework for Nvidia GPUs.