Setting up your Nvidia GPU for Deep Learning(2020)

#artificialintelligence

This article aims to help anyone who wants to set up their Windows machine for deep learning. Although setting up your GPU for deep learning is slightly complex, the performance gain is well worth it. The steps I took to get my RTX 2060 ready for deep learning are explained in detail. The first step, when you search for the files to download, is to check which version of CUDA TensorFlow supports (this can be checked here); at the time of writing this article, it supports CUDA 10.1. To download cuDNN you will have to register as an NVIDIA developer. I have provided the download links to all the software to be installed below.
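Once CUDA and cuDNN are installed, a quick sanity check is worth running. The snippet below is a minimal sketch assuming a TensorFlow 2.x build with GPU support; it lists the GPUs TensorFlow can see and runs a small matrix multiply on the first one.

```python
# Minimal check that TensorFlow sees the GPU after installing CUDA/cuDNN.
# Assumes a TensorFlow 2.x build with GPU support.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())

gpus = tf.config.list_physical_devices("GPU")
print("GPUs detected:", gpus)

if gpus:
    # Run a tiny matrix multiply on the first GPU as a smoke test.
    with tf.device("/GPU:0"):
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        c = tf.matmul(a, b)
    print("GPU matmul OK, result shape:", c.shape)
```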


YOLObile: Real-Time Object Detection on Mobile Devices via Compression-Compilation Co-Design

arXiv.org Artificial Intelligence

The rapid development and wide utilization of object detection techniques have drawn attention to both the accuracy and speed of object detectors. However, current state-of-the-art object detection works are either accuracy-oriented, using a large model but leading to high latency, or speed-oriented, using a lightweight model but sacrificing accuracy. In this work, we propose the YOLObile framework, which achieves real-time object detection on mobile devices via compression-compilation co-design. A novel block-punched pruning scheme is proposed for any kernel size. To improve computational efficiency on mobile devices, a GPU-CPU collaborative scheme is adopted along with advanced compiler-assisted optimizations. Experimental results indicate that our pruning scheme achieves a 14$\times$ compression rate of YOLOv4 with 49.0 mAP. Under our YOLObile framework, we achieve 17 FPS inference speed using the GPU on a Samsung Galaxy S20. By incorporating our proposed GPU-CPU collaborative scheme, the inference speed is increased to 19.1 FPS, outperforming the original YOLOv4 by a 5$\times$ speedup.
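The abstract does not spell out the block-punched pruning scheme itself; as a rough, hypothetical illustration of block-wise weight pruning in general (not the paper's actual method), the NumPy sketch below zeroes the lowest-magnitude entries within fixed-size blocks. The function name, block layout, and sparsity level are assumptions for illustration only.

```python
# Illustrative block-wise magnitude pruning (NOT the paper's exact block-punched scheme).
# Weights are grouped into fixed-size blocks and the lowest-magnitude entries in each
# block are zeroed, keeping the sparsity pattern regular enough for fast inference.
import numpy as np

def prune_blockwise(weights, block_size=16, sparsity=0.5):
    """Zero out the smallest-magnitude `sparsity` fraction inside each block."""
    flat = weights.reshape(-1).copy()
    pad = (-len(flat)) % block_size                      # pad so blocks divide evenly
    flat = np.concatenate([flat, np.zeros(pad, dtype=flat.dtype)])
    blocks = flat.reshape(-1, block_size)

    k = int(block_size * sparsity)                       # entries to prune per block
    if k > 0:
        idx = np.argsort(np.abs(blocks), axis=1)[:, :k]  # smallest-magnitude indices
        np.put_along_axis(blocks, idx, 0.0, axis=1)

    return blocks.reshape(-1)[:weights.size].reshape(weights.shape)

w = np.random.randn(64, 3, 3, 3)          # e.g., a small conv kernel tensor
w_pruned = prune_blockwise(w, block_size=8, sparsity=0.75)
print("sparsity:", 1.0 - np.count_nonzero(w_pruned) / w_pruned.size)
```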


Which GPU(s) to Get for Deep Learning

#artificialintelligence

Deep learning is a field with intense computational requirements, and your choice of GPU will fundamentally determine your deep learning experience. But what features are important if you want to buy a new GPU? How do you make a cost-efficient choice? This blog post will delve into these questions, tackle common misconceptions, give you an intuitive understanding of how to think about GPUs, and will lend you advice, which will help you to make a choice that is right for you.

This blog post is designed to give you different levels of understanding about GPUs and the new Ampere series GPUs from NVIDIA. You have the choice: (1) If you are not interested in the details of how GPUs work, what makes a GPU fast, and what is unique about the new NVIDIA RTX 30 Ampere series, you can skip right to the performance and performance per dollar charts and the recommendation section. You might want to skip a section or two based on your understanding of the presented topics. I will head each major section with a small summary, which might help you to decide if you want to read the section or not.

This blog post is structured in the following way. First, I will explain what makes a GPU fast. I will discuss CPUs vs GPUs, Tensor Cores, memory bandwidth, and the memory hierarchy of GPUs and how these relate to deep learning performance. These explanations might help you to get a more intuitive sense of what to look for in a GPU. Then I will make theoretical estimates for GPU performance and align them with some marketing benchmarks from NVIDIA to get reliable, unbiased performance data. I discuss the unique features of the new NVIDIA RTX 30 Ampere GPU series that are worth considering if you buy a GPU. From there, I make GPU recommendations for 1-2, 4, 8 GPU setups, and GPU clusters. After that follows a Q&A section of common questions posed to me in Twitter threads; in that section, I will also address common misconceptions and some miscellaneous issues, such as cloud vs desktop, cooling, AMD vs NVIDIA, and others.

If you use GPUs frequently, it is useful to understand how they work. This knowledge will come in handy in understanding why GPUs might be slow in some cases and fast in others. In turn, you might be able to understand better why you need a GPU in the first place and how other future hardware options might be able to compete.
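To relate the discussion of memory size, Tensor Cores, and compute capability to your own hardware, the following sketch (assuming a CUDA-enabled PyTorch install) prints the properties commonly compared across cards; memory bandwidth itself is not exposed by this query and has to come from the vendor's spec sheet.

```python
# Inspect the GPU properties that matter when comparing cards for deep learning.
# Assumes a CUDA-enabled build of PyTorch is installed.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        p = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {p.name}")
        print(f"  total memory   : {p.total_memory / 1024**3:.1f} GiB")
        print(f"  compute cap.   : {p.major}.{p.minor}  (7.x = Turing, 8.x = Ampere)")
        print(f"  multiprocessors: {p.multi_processor_count}")
else:
    print("No CUDA device visible to PyTorch.")
```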


Deep Learning NVIDIA GPU Workstations

#artificialintelligence

We understand every development environment is different, so shouldn't you have the option to choose what's best for you? All EMLI (Exxact Machine Learning Images) environments are available in the latest Ubuntu or CentOS Linux versions, and are built to perform right out of the box.


GPT-3 Creative Fiction

#artificialintelligence

What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
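As a hypothetical illustration of that advice (not the article's exact prompt), the snippet below builds a summarization prompt that ends with the first words of the desired output, so the model is nudged to continue in the intended mode rather than pivot into some other kind of completion.

```python
# Hypothetical prompt template illustrating the advice above: constrain the model
# by showing the task and then writing the first few words of the target output
# yourself, so the completion continues in the intended mode.
passage = "..."  # the text you want summarized

prompt = (
    "My second grader asked me what this passage means:\n"
    f'"""{passage}"""\n'
    "I rephrased it for him, in plain language a second grader can understand:\n"
    '"""'
    "Basically, "   # seeding the first words of the desired output
)
print(prompt)
```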


Neuromorphic Chips Take Shape

Communications of the ACM

Chips designed specifically to model the neurons and synapses in the human brain are poised to change computing in profound ways.


Top 10 Processor to Watch in 2020 - Analytics Insight

#artificialintelligence

The Internet of Things (IoT) has sparked the proliferation of connected devices. These devices, which house sensors to collect data on day-to-day activities or for monitoring purposes, are embedded with microcontroller and microprocessor chips. These chips are mounted based on the sensor data needed to complete an assigned task, so there is no one-processor-fits-all architecture. For example, some devices will perform a limited amount of processing on data such as temperature, humidity, pressure, or gravity; more complicated systems, however, will need to handle (multiple) high-resolution sound or video streams.


NVIDIA Announces Ampere - The Most Exciting GPU Architecture For Modern AI

#artificialintelligence

The GPU Technology Conference is the most exciting event for the AI and ML ecosystem. From researchers in academia to product managers at hyperscale cloud companies to IoT builders and makers, this conference has something relevant for each of them. As an AIoT enthusiast and a maker, I eagerly look forward to GTC. Due to the current COVID-19 situation, I was a bit disappointed to see the event turning into a virtual conference. But the keynote delivered by Jensen Huang, the CEO of NVIDIA, made me forget that it was a virtual event.


r/deeplearning - Is there a viable deep learning platform for python that can utilize an AMD GPU (5700xt) on Windows 10?

#artificialintelligence

I think I am going to echo what has been said. Windoze is not going to cut it. I know you don't want to do Linux for whatever reason; I am/was hardcore Mac and really wanted to find a way to do GPU work in MacOS. I'm now looking at building an all AMD machine with POP! just as a test bed for the ROCm stuff; I am just quirky that way. So, you're a bit ahead of me; I had to buy a PC and then put Linux on it (I deleted Windoze before I even started).
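For completeness, if one does move to Linux with a ROCm build of PyTorch (an assumption, not something the thread confirms works for the 5700 XT), a quick sanity check looks like the sketch below; ROCm builds expose the AMD GPU through the same torch.cuda API.

```python
# Quick sanity check for an AMD card under Linux, assuming a ROCm build of PyTorch.
# ROCm builds report the GPU through torch.cuda; torch.version.hip is set instead
# of torch.version.cuda.
import torch

print("PyTorch:", torch.__version__)
print("HIP/ROCm version:", getattr(torch.version, "hip", None))
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```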


A Survey on Edge Intelligence

arXiv.org Artificial Intelligence

Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis, based on artificial intelligence and located close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and protect the privacy and security of the data. Although it emerged only recently, spanning the period from 2011 to now, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey on the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of the emerging research field and the current state-of-the-art and discuss the important open issues and possible theoretical and technical solutions.