Nvidia Rapids cuGraph: Making graph analysis ubiquitous

ZDNet

#artificialintelligence

A new open-source library from Nvidia could be the secret ingredient to advancing analytics and making graph databases faster. Nvidia stopped being "just" a hardware company long ago. Since its hardware runs much of the compute behind the explosion in AI, Nvidia has taken it upon itself to pave the last mile in software. It does this by developing and releasing libraries that software developers and data scientists can use to integrate GPU power into their work. The premise is simple: not everyone is a specialist in parallelism, or wants to be one.
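
As a minimal sketch of the kind of workflow cuGraph enables (the input file and column names below are hypothetical), graph algorithms such as PageRank run directly on GPU DataFrames:

```python
import cudf
import cugraph

# Load an edge list into a GPU DataFrame (hypothetical file and column names)
edges = cudf.read_csv("edges.csv", header=None,
                      names=["src", "dst"], dtype=["int32", "int32"])

# Build a graph and run PageRank entirely on the GPU
G = cugraph.Graph()
G.from_cudf_edgelist(edges, source="src", destination="dst")
ranks = cugraph.pagerank(G)

# Highest-ranked vertices first
print(ranks.sort_values("pagerank", ascending=False).head())
```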


How to use NVIDIA GPUs for Machine Learning with the new Data Science PC from Maingear

#artificialintelligence

Deep learning enables us to perform many human-like tasks, but if you're a data scientist who doesn't work at a FAANG company (and isn't building the next AI startup), chances are you still use good old (OK, maybe not that old) machine learning for your daily tasks. Deep learning is very computationally intensive, so all the main DL libraries use GPUs to improve processing speed. But if you've ever felt left out of the party because you don't work with deep learning, those days are over: with the RAPIDS suite of libraries, we can now run our data science and analytics pipelines entirely on GPUs. In this article, we'll talk about some of these RAPIDS libraries and get to know the new Data Science PC from Maingear. Generally speaking, GPUs are fast because they have high-bandwidth memory and hardware that performs floating-point arithmetic at significantly higher rates than conventional CPUs [1].
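
To illustrate running a pipeline on the GPU, here is a minimal sketch (the data and column names are invented) of how cuDF, the RAPIDS DataFrame library, mirrors the familiar pandas API:

```python
import cudf

# Create a GPU DataFrame with the same API as pandas (invented example data)
gdf = cudf.DataFrame({
    "user": ["a", "b", "a", "c", "b"],
    "clicks": [3, 1, 4, 1, 5],
})

# Typical pandas-style operations, executed on the GPU
summary = gdf.groupby("user")["clicks"].sum().sort_values(ascending=False)
print(summary)
```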


GPU Accelerated Data Analytics & Machine Learning

#artificialintelligence

GPU acceleration is becoming increasingly important. As evidence of this shift, a growing number of online data science platforms now offer GPU-enabled solutions; examples include Kaggle, Google Colaboratory, Microsoft Azure, and Amazon Web Services (AWS). In this article, I will first introduce the NVIDIA open-source Python RAPIDS libraries and then offer a practical demonstration of how RAPIDS can speed up data analysis by up to 50 times. All the code used for this article is available on my GitHub and Google Colaboratory for you to play with.
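
The article's own notebooks are linked above; as a generic illustration of the kind of CPU-versus-GPU comparison involved (not the author's actual benchmark, and with sizes chosen arbitrarily), one can time the same operation in pandas and cuDF:

```python
import time
import numpy as np
import pandas as pd
import cudf

# Generate a large random dataset (size chosen arbitrarily for illustration)
n = 10_000_000
data = {"key": np.random.randint(0, 100, n), "value": np.random.rand(n)}

pdf = pd.DataFrame(data)
gdf = cudf.DataFrame(data)

# Time the same groupby aggregation on CPU (pandas) and GPU (cuDF)
t0 = time.time()
pdf.groupby("key")["value"].mean()
print(f"pandas: {time.time() - t0:.3f}s")

t0 = time.time()
gdf.groupby("key")["value"].mean()
print(f"cuDF:   {time.time() - t0:.3f}s")
```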


Attend the Data Analytics Conference Sessions at NVIDIA GTC DC 2019

#artificialintelligence

Get hands-on with RAPIDS, a collection of data science libraries that enables end-to-end GPU acceleration of data science workflows. You'll learn how to apply a wide variety of GPU-accelerated machine learning algorithms, including XGBoost, cuGraph, and cuML, to perform data analysis at massive scale. (Certificate available.)
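
As a taste of what such a session covers, here is a minimal sketch (synthetic data, with parameter values chosen purely for illustration) of training XGBoost with its GPU-accelerated tree method:

```python
import numpy as np
import xgboost as xgb

# Synthetic regression data, purely for illustration
X = np.random.rand(100_000, 20)
y = X @ np.random.rand(20) + np.random.rand(100_000)

dtrain = xgb.DMatrix(X, label=y)

# "gpu_hist" selects the GPU-accelerated histogram tree method
params = {"tree_method": "gpu_hist", "objective": "reg:squarederror"}
model = xgb.train(params, dtrain, num_boost_round=50)
```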


Nvidia GPUs for data science, analytics, and distributed machine learning using Python with Dask

ZDNet

Nvidia has been more than a hardware company for a long time. As its GPUs are broadly used to run machine learning workloads, machine learning has become a key priority for Nvidia. At its GTC event this week, Nvidia made a number of related announcements, aiming to build on machine learning and extend into data science and analytics. Nvidia wants to "couple software and hardware to deliver the advances in computing power needed to transform data into insights and intelligence." Nvidia CEO Jensen Huang emphasized the interplay between chip architecture, systems, algorithms, and applications.
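
A minimal sketch of the Dask pattern the article refers to (the file path and column names are hypothetical): dask-cuda spins up one worker per local GPU, and dask_cudf partitions a DataFrame across them.

```python
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
import dask_cudf

if __name__ == "__main__":
    # Start one Dask worker per local GPU
    cluster = LocalCUDACluster()
    client = Client(cluster)

    # Read CSVs as a cuDF DataFrame partitioned across the GPUs
    # (hypothetical path and column names)
    ddf = dask_cudf.read_csv("transactions-*.csv")
    result = ddf.groupby("customer_id")["amount"].sum().compute()
    print(result.head())
```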