
Nvidia GPUs for data science, analytics, and distributed machine learning using Python with Dask

ZDNet

Nvidia has been more than a hardware company for a long time. With its GPUs broadly used to run machine learning workloads, the field has become a key priority for the company. At its GTC event this week, Nvidia made a number of related announcements, aiming to build on machine learning and extend into data science and analytics. Nvidia wants to "couple software and hardware to deliver the advances in computing power needed to transform data into insights and intelligence." Nvidia CEO Jensen Huang emphasized the collaboration between chip architecture, systems, algorithms and applications.
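
As a concrete illustration of the kind of Python-plus-Dask workflow the article's headline refers to, here is a minimal sketch; the dataset, column names, and partition count are invented for the example, and it runs on plain CPU Dask (the GPU-accelerated variant would swap in the RAPIDS equivalents of pandas and Dask DataFrames).

import pandas as pd
import dask.dataframe as dd

# Build a toy dataset and split it into partitions Dask can process in parallel.
pdf = pd.DataFrame({"group": ["a", "b"] * 50_000,
                    "value": range(100_000)})
ddf = dd.from_pandas(pdf, npartitions=4)

# Lazily define a groupby aggregation; nothing runs until .compute() is called.
result = ddf.groupby("group")["value"].mean().compute()
print(result)

The same lazy, partitioned API is what lets Dask distribute work across many workers, or, with the RAPIDS libraries, across GPUs, without changing the analysis code.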


'Significant and fundamental' Windows tweak sets the stage for GPU innovation

PCWorld

Over the past week, both Nvidia and AMD released fresh graphics card drivers that unlocked "hardware-accelerated GPU scheduling," a new feature introduced in the Windows 10 May 2020 Update. Now, Microsoft has revealed what that setting does, and it turns out that the innocuous-sounding, opt-in feature actually represents a fundamental redesign of how the Windows Display Driver Model works. You probably won't see any significant performance changes from activating it now, however. Historically, the GPU scheduler for the Windows Display Driver Model (WDDM) relied on "a high-priority thread running on the CPU that coordinates, prioritizes, and schedules the work submitted by various applications," explains Steve Pronovost, the lead and architect for the Windows Graphics Kernel. That buffering introduces inherent latency, however, as the CPU registers user input a full frame before the GPU renders it.
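
A schematic toy in Python (not Windows or driver code; all names, timings, and the batching rule are invented for illustration) of the CPU-side model Pronovost describes: one high-priority thread gathers the work submitted by several applications and hands it to the GPU in frame-sized batches, which is where the extra frame of latency comes from.

import queue
import threading
import time

submissions = queue.Queue()        # work submitted by all "applications"
FRAME = 1 / 60                     # pretend one frame is ~16.7 ms

def app(name, n_commands):
    # An application thread submitting GPU commands as it produces them.
    for i in range(n_commands):
        submissions.put((name, i, time.monotonic()))
        time.sleep(0.002)

def cpu_scheduler(n_frames=5):
    # The high-priority CPU thread: coordinate, prioritize, and batch the work.
    for _ in range(n_frames):
        time.sleep(FRAME)          # wait out the batching window
        batch = []
        while not submissions.empty():
            batch.append(submissions.get())
        if batch:
            waited = (time.monotonic() - min(t for _, _, t in batch)) * 1000
            print(f"submitted batch of {len(batch)}; oldest command waited {waited:.1f} ms")

threads = [threading.Thread(target=app, args=(f"app{i}", 20)) for i in range(2)]
threads.append(threading.Thread(target=cpu_scheduler))
for t in threads:
    t.start()
for t in threads:
    t.join()

The point of hardware-accelerated scheduling is to move that coordination onto the GPU itself, so submitted commands no longer have to wait out a CPU-side batching window.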


No Nonsense Nvidia: A Rebuttal

#artificialintelligence

Nvidia (NASDAQ:NVDA) has the hardware lead in deep learning, full stop. I explained why this is so in an article I published last May. Since then, Nvidia investors have enjoyed outsized gains, which has recently brought about a number of articles speculating about an imminent reversal. This article is a rebuttal to a recent piece about Nvidia's AI prospects and possible threats from specialty deep learning hardware. In my opinion as a deep learning researcher, the recent piece contains a number of technical inaccuracies.


NVIDIA, Open-Source Ecosystem Accelerate Data Science | NVIDIA Blog

#artificialintelligence

No matter the industry, data science has become a universal toolkit for businesses. Data analytics and machine learning give organizations insights and answers that shape their day-to-day actions and future plans. Being data-driven has become essential to leading in any industry. While the world's data doubles each year, CPU computing has hit a brick wall with the end of Moore's law. For this reason, scientific computing and deep learning have turned to NVIDIA GPU acceleration.


Generating a Long Range Plan for a New Class of Astronomical Observatories

AAAI Conferences

We present a long range planning (LRP) system, the Spike Plan Window Scheduler, which has been in use for observations on the Hubble Space Telescope (HST) for the past four years and which is being adapted for the Space Infrared Telescope Facility (SIRTF) and Next Generation Space Telescope (NGST) orbiting astronomical observatories. Due to the relatively underconstrained nature of this domain, generating a long range plan is not handled in the traditional AI planning sense of generating operators to achieve goals. Rather, producing an LRP is treated as a type of scheduling problem in which what is scheduled is not the scientific observations themselves but "plan windows" for those observations. This paper investigates planning subproblems that arise in this type of domain. In particular, we discuss the SIRTF Long Range Plan, which requires planning of "instrument campaigns" in conjunction with observation plan window scheduling.
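
To make the "plan windows" idea concrete, here is a toy Python sketch (not the Spike algorithm; the observations, the week-sized windows, and the load-balancing rule are all invented): each observation is assigned a coarse interval within which a later short-term scheduler will place it, rather than an exact execution time.

from collections import defaultdict

# (observation id, earliest feasible week, latest feasible week) -- made-up data
observations = [
    ("obs-01", 0, 3),
    ("obs-02", 1, 4),
    ("obs-03", 0, 1),
    ("obs-04", 2, 5),
    ("obs-05", 0, 5),
]

load = defaultdict(int)    # observations already assigned to each week
plan_windows = {}

for obs_id, earliest, latest in observations:
    # Stand-in for Spike's real constraints (visibility, campaigns, resources):
    # pick the least-loaded feasible week as this observation's plan window.
    week = min(range(earliest, latest + 1), key=lambda w: load[w])
    plan_windows[obs_id] = week
    load[week] += 1

for obs_id, week in plan_windows.items():
    print(f"{obs_id}: plan window = week {week}")

Deferring exact placement in this way is what keeps the long range plan flexible in an underconstrained domain.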