

Nvidia DLSS Is Building a Walled Garden, and It's Working

#artificialintelligence

I just reviewed AMD's new Radeon RX 6600, which is a budget GPU that squarely targets 1080p gamers. It's a decent option, especially in a time when GPU prices are through the roof, but it exposed a trend that I've seen brewing over the past few graphics card launches. Nvidia's Deep Learning Super Sampling (DLSS) tech is too good to ignore, no matter how powerful the competition is from AMD. In a time when resolutions and refresh rates continue to climb, and demanding features like ray tracing are becoming the norm, upscaling is essential to run the latest games in their full glory. AMD offers an alternative to DLSS in the form of FidelityFX Super Resolution (FSR).


AI Fueling a Technological Revolution in Africa

#artificialintelligence

AI is at play on a global stage, and local developers are stealing the show. Grassroots communities are essential to driving AI innovation, according to Kate Kallot, head of emerging areas at NVIDIA. On its opening day, Kallot gave a keynote speech at the largest AI Expo Africa to date, addressing a virtual crowd of 10,000 people. She highlighted how AI can fuel technological and creative revolutions around the world. Kallot also shared how NVIDIA supports developers in emerging markets to build and scale their AI projects, including through the NVIDIA Developer Program, which has more than 2.5 million members; the NVIDIA Inception Program, which offers go-to-market support, expertise and technology for AI, data science and HPC startups; and the NVIDIA Deep Learning Institute, which offers educational resources for anyone who wants to learn about all things AI. "I hope to inspire you on ways to fuel your own applications and help advance the African AI revolution," Kallot said.


Nvidia's Canvas AI Painting Tool Turns Color Blobs into Realistic Imagery

#artificialintelligence

Nvidia's new Canvas tool, now available as a free beta, is a real-time painting application built on GauGAN that anyone with an Nvidia RTX GPU can use. Creators sketch a rough landscape by painting blobs of color, and the tool fills it in with convincingly photorealistic content. Each distinct color represents a different type of feature: water, mountains, grass, ruins, and so on. As colors are blobbed onto the canvas, the crude sketch is passed to a generative adversarial network (GAN), in which a generator network tries to produce a realistic image while a discriminator network judges whether the result looks real.
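The color-to-feature mapping described above is essentially a semantic segmentation map: each paint color stands for one class label that the GAN then renders. A minimal sketch of that first step, with a hypothetical three-color palette (the actual Canvas palette and class set are assumptions here):

```python
import numpy as np

# Hypothetical palette mapping paint colors to semantic classes,
# loosely mirroring how each Canvas color stands for a feature type.
PALETTE = {
    (0, 0, 255): "water",       # class index 0
    (128, 128, 128): "mountain", # class index 1
    (0, 255, 0): "grass",       # class index 2
}

def blobs_to_labels(canvas: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 color-blob image into an HxW array of class indices."""
    labels = np.zeros(canvas.shape[:2], dtype=np.int64)
    for idx, color in enumerate(PALETTE):
        # Mark every pixel whose RGB value matches this palette color.
        mask = np.all(canvas == np.array(color), axis=-1)
        labels[mask] = idx
    return labels

# A tiny 2x2 "painting": water on the left column, grass on the right.
canvas = np.array([[[0, 0, 255], [0, 255, 0]],
                   [[0, 0, 255], [0, 255, 0]]], dtype=np.uint8)
labels = blobs_to_labels(canvas)
print(labels)  # [[0 2] [0 2]]: 0 = water, 2 = grass
```

In the real pipeline, a label map like this conditions the GAN generator, which synthesizes photorealistic texture for each region.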


Does using Nvidia's DLSS require a better CPU? Ask an expert

PCWorld

Q: If you want to use DLSS to make a game run faster, does it need a better CPU since the GPU is utilized less? To get everyone on the same page, let's first recap what DLSS is. Deep Learning Super Sampling, or DLSS, is Nvidia's proprietary technology that uses machine learning and dedicated hardware on the company's RTX cards to internally render games at a lower resolution, then upscale the result and output it at the desired resolution. The first version of this tech didn't look great, but version 2.0 brought massive improvements. Now, not only does the final result look virtually the same as native rendering in most circumstances, but you also get higher performance--meaning you can either enjoy higher framerates at higher resolutions, or switch on a resource-intensive setting like ray tracing without suffering for it.
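The render-low-then-upscale idea can be illustrated with a trivial stand-in. This is not DLSS (which uses a trained neural network on dedicated Tensor Core hardware); it is only a nearest-neighbor sketch of the resolution flow, with the 1080p-to-4K figures chosen as an example:

```python
import numpy as np

def upscale_nearest(frame: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor upscale: repeat each pixel `factor` times along both
    spatial axes. A crude stand-in for DLSS's learned upscaling, for
    illustration only."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# The game renders internally at 1080p...
low_res = np.zeros((1080, 1920, 3), dtype=np.uint8)
# ...and outputs at 4K (2x in each dimension), so the GPU shades
# only a quarter of the final pixel count per frame.
high_res = upscale_nearest(low_res, 2)
print(high_res.shape)  # (2160, 3840, 3)
```

The performance win comes from that smaller internal render target; the answer to the reader's CPU question follows from the fact that the CPU's per-frame work is largely resolution-independent.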


How to Download, Install, and Use an Nvidia GPU for TensorFlow on Windows

#artificialintelligence

This article was published as a part of the Data Science Blogathon. "Graphics has lately made a great shift towards machine learning, which itself is about understanding data" - Jefferson Han, Founder and Chief Scientist of Perceptive Pixel. CPUs can fetch data at a fast rate but cannot process much data at a time, since a CPU must make many trips to main memory to perform even a simple task. A CPU executes jobs sequentially and has fewer cores, while a GPU comes with hundreds of smaller cores working in parallel, making it a highly parallel architecture and improving performance on this kind of workload. TensorFlow's GPU support works only if you have a CUDA-enabled graphics card; nearly all Nvidia graphics cards released in the past three or four years support CUDA.
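As a rough sketch of the setup flow the article walks through (exact package versions and the CUDA/cuDNN pairing vary by TensorFlow release, so treat these commands as a template rather than a recipe):

```shell
# 1. Confirm the GPU and its driver are visible to the system:
nvidia-smi

# 2. Install a CUDA-enabled TensorFlow build into your environment:
pip install tensorflow

# 3. Verify that TensorFlow can actually see the GPU:
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

If step 3 prints an empty list, the usual culprits are a driver/CUDA version mismatch or a TensorFlow build without GPU support.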


Nvidia will buy Arm for up to $40 billion, combining smartphone, GPU powerhouses

PCWorld

Nvidia agreed to purchase Arm for up to $40 billion in cash and stock, the companies said Sunday night. This mammoth deal in the chip industry is expected to bolster AI and GPU powerhouse Nvidia's chip portfolio, even as it's sure to attract antitrust attention in the smartphone market. Nvidia will pay Softbank, the company's current owner, a total of $21.5 billion in Nvidia stock and $12 billion in cash, including $2 billion payable at signing. Nvidia will also issue $1.5 billion in equity to Arm employees. It may also pay Softbank up to $5 billion in cash or stock if Arm meets specific financial performance targets--bringing the final purchase price up to $40 billion -- the largest chip deal ever.
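The components quoted above do add up to the headline figure; a quick arithmetic check (amounts in billions of USD, taken directly from the article):

```python
# Components of the Nvidia-Arm deal, in billions of USD.
stock_to_softbank = 21.5      # Nvidia stock paid to Softbank
cash_to_softbank = 12.0       # cash, including $2B payable at signing
equity_to_arm_employees = 1.5 # equity issued to Arm employees
earnout = 5.0                 # contingent on Arm hitting financial targets

base_price = stock_to_softbank + cash_to_softbank + equity_to_arm_employees
max_price = base_price + earnout
print(base_price, max_price)  # 35.0 40.0
```

So the guaranteed consideration is $35 billion, and only if Arm hits its performance targets does the total reach the full $40 billion.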


Introducing GeForce RTX 30 Series GPUs

#artificialintelligence

A decade ago, GPUs were judged on whether they could power through Crysis. The latest NVIDIA Ampere GPU architecture, unleashed in May to power the world's supercomputers and hyperscale data centers, has come to gaming. "If the last 20 years was amazing, the next 20 will seem like nothing short of science fiction," NVIDIA CEO Jensen Huang said, speaking from the kitchen of his Silicon Valley home. Today's NVIDIA Ampere launch is "a giant step into the future," he added. In addition to the trio of new GPUs -- the flagship GeForce RTX 3080, the GeForce RTX 3070 and the "ferocious" GeForce RTX 3090 -- Huang introduced a slate of new tools for GeForce gamers.


Windows 10 Linux subsystem: You get GPU acceleration – with Intel, AMD, Nvidia drivers

ZDNet

Nvidia, Intel and AMD have announced their support for Microsoft's new effort to bring graphics processor support to the Windows 10 Windows Subsystem for Linux to enhance machine-learning training. GPU support for WSL arrived on Wednesday in the Dev Channel preview of Windows 10 build 20150 under Microsoft's reorganized testing structure, which lets it test Windows 10 builds that aren't tied to a specific future feature release. Microsoft announced upcoming GPU support for WSL a few weeks ago at Build 2020, along with support for running Linux GUI apps. The move on GPU access for WSL is intended to bring the performance of applications running in WSL2 up to par with those running on Windows. GPU compute support is the feature most requested by WSL users, according to Microsoft. The 20150 update includes support for Nvidia's CUDA parallel computing platform and GPUs, as well as GPUs from AMD and Intel.


NVIDIA Announces Ampere - The Most Exciting GPU Architecture For Modern AI

#artificialintelligence

The GPU Technology Conference is the most exciting event for the AI and ML ecosystem. From researchers in academia to product managers at hyperscale cloud companies to IoT builders and makers, this conference has something relevant for each of them. As an AIoT enthusiast and a maker, I eagerly look forward to GTC. Due to the current COVID-19 situation, I was a bit disappointed to see the event turning into a virtual conference. But the keynote delivered by Jensen Huang, the CEO of NVIDIA, made me forget that it was a virtual event.


EETimes - Nvidia Reinvents GPU, Blows Previous Generation Out of the Water

#artificialintelligence

Jensen Huang's much-anticipated keynote speech today, postponed from Nvidia's GPU Technology Conference (GTC) in March, will unveil the company's eighth-generation GPU architecture. Emerging three years after the debut of the previous generation Volta architecture, Ampere is said to be the biggest generational leap in the company's history. Ampere is built to accelerate both AI training and inference, as well as data analytics, scientific computing and cloud graphics. The first chip built on Ampere, the A100, has some pretty impressive vital statistics. Nvidia claims the A100 has 20x the performance of the equivalent Volta device for both AI training (single precision, 32-bit floating point numbers) and AI inference (8-bit integer numbers).