Nvidia launches a new GPU architecture and the Grace CPU Superchip – TechCrunch

#artificialintelligence

At its annual GTC conference for AI developers, Nvidia today announced its next-gen Hopper GPU architecture and the Hopper H100 GPU, as well as a new data center chip that combines the GPU with a high-performance CPU, which Nvidia calls the "Grace CPU Superchip" (not to be confused with the Grace Hopper Superchip). With Hopper, Nvidia is launching a number of new and updated technologies, but for AI developers, the most important one may just be the architecture's focus on transformer models, which have become the machine learning technique de rigueur for many use cases and which power models like GPT-3 and BERT. The new Transformer Engine in the H100 chip promises to speed up model training by up to six times, and because this new architecture also features Nvidia's new NVLink Switch system for connecting multiple nodes, large server clusters powered by these chips will be able to scale up to support massive networks with less overhead. "The largest AI models can require months to train on today's computing platforms," Nvidia's Dave Salvator writes in today's announcement. AI, high-performance computing and data analytics are growing in complexity, with some models, like large language models, reaching trillions of parameters.


GTC 2022: Nvidia flexes its GPU and platform muscles

#artificialintelligence

Nvidia packed about three years' worth of news into its GPU Technology Conference today. Flamboyant CEO Jensen Huang's 1-hour, 39-minute keynote covered a lot of ground, but the unifying themes across the majority of the two dozen announcements were GPU-centered computing and Nvidia's platform approach to everything it builds. Most people know Nvidia as the world's largest manufacturer of graphics processing units, or GPUs. The GPU is a chip that was first used to accelerate graphics in gaming systems.


Why scrapping Nvidia Arm deal is ultimately bad for the industry

ZDNet

The largest proposed semiconductor acquisition in IT history – Nvidia's merger with Arm – was called off today due to significant regulatory challenges, with antitrust issues being the main hurdle. The $40 billion deal was initially announced in September 2020, and there had been wide speculation that this would eventually be the outcome, based on several factors that I believe were either untrue or overblown. Before I get into that, it's important to understand why this deal was so important. Nvidia's core product is the graphics processing unit, or GPU, which was initially used to improve graphics capabilities on computers for uses such as gaming. It just so happens that the architecture of a GPU makes it ideal for other tasks that require accelerated computing, such as real-time graphics rendering, virtual reality, and artificial intelligence.


Nvidia's AI-powered scaling makes old games look better without a huge performance hit

#artificialintelligence

Nvidia's latest game-ready driver includes a tool that could let you improve the image quality of games that your graphics card can easily run, alongside optimizations for the new God of War PC port. The tech is called Deep Learning Dynamic Super Resolution, or DLDSR, and Nvidia says you can use it to make "most games" look sharper by running them at a higher resolution than your monitor natively supports. DLDSR builds on Nvidia's Dynamic Super Resolution tech, which has been around for years. Essentially, regular old DSR renders a game at a higher resolution than your monitor can handle and then downscales it to your monitor's native resolution. This leads to an image with better sharpness but usually comes with a dip in performance (you are asking your GPU to do more work, after all). So, for instance, if you had a graphics card capable of running a game at 4K but only had a 1440p monitor, you could use DSR to get a boost in clarity.
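To make the idea concrete, here is a toy Python sketch of the supersampling step DSR performs: render at a higher resolution, then downscale to the monitor's native resolution. This is an illustration only, not Nvidia's actual filter; it uses a simple box filter and an integer scale factor (4K down to 1080p), whereas real DSR supports other ratios and DLDSR replaces the fixed filter with a neural network.

```python
import numpy as np

def downscale_box(image: np.ndarray, factor: int) -> np.ndarray:
    """Downscale an HxWxC image by an integer factor using a box filter
    (average each factor x factor block of pixels)."""
    h, w, c = image.shape
    h2, w2 = h // factor, w // factor
    # Trim so the dimensions divide evenly, then average each block.
    trimmed = image[:h2 * factor, :w2 * factor]
    blocks = trimmed.reshape(h2, factor, w2, factor, c)
    return blocks.mean(axis=(1, 3))

# "Render" a 4K frame (random pixels stand in for a real render target),
# then downscale it to 1080p for a 1080p monitor.
hi_res = np.random.rand(2160, 3840, 3).astype(np.float32)
native = downscale_box(hi_res, 2)
print(native.shape)  # (1080, 1920, 3)
```

Averaging several rendered samples per output pixel is what buys the extra sharpness, and also why the GPU has to do more work than rendering at native resolution directly.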


Nvidia DLSS Is Building a Walled Garden, and It's Working

#artificialintelligence

I just reviewed AMD's new Radeon RX 6600, a budget GPU that squarely targets 1080p gamers. It's a decent option, especially at a time when GPU prices are through the roof, but it exposed a trend I've seen brewing over the past few graphics card launches: Nvidia's Deep Learning Super Sampling (DLSS) tech is too good to ignore, no matter how strong the competition from AMD is. At a time when resolutions and refresh rates continue to climb, and demanding features like ray tracing are becoming the norm, upscaling is essential to running the latest games in their full glory. AMD offers an alternative to DLSS in the form of FidelityFX Super Resolution (FSR).


AI Fueling a Technological Revolution in Africa

#artificialintelligence

AI is at play on a global stage, and local developers are stealing the show. Grassroots communities are essential to driving AI innovation, according to Kate Kallot, head of emerging areas at NVIDIA. On the expo's opening day, Kallot gave a keynote speech at the largest AI Expo Africa to date, addressing a virtual crowd of 10,000 people. She highlighted how AI can fuel technological and creative revolutions around the world. Kallot also shared how NVIDIA supports developers in emerging markets as they build and scale their AI projects, including through the NVIDIA Developer Program, which has more than 2.5 million members; the NVIDIA Inception Program, which offers go-to-market support, expertise and technology for AI, data science and HPC startups; and the NVIDIA Deep Learning Institute, which offers educational resources for anyone who wants to learn about all things AI. "I hope to inspire you on ways to fuel your own applications and help advance the African AI revolution," Kallot said.


How to Use NVIDIA GPU Accelerated Libraries - KDnuggets

#artificialintelligence

If you are working on an AI project, it's time to take advantage of NVIDIA GPU accelerated libraries if you aren't doing so already. It wasn't until the late 2000s, when GPUs began to be used to train neural networks and drastically sped up the process, that many AI projects became viable. Since that time, NVIDIA has been creating some of the best GPUs for deep learning, and GPU accelerated libraries have become a popular choice for AI projects. If you are wondering how you can take advantage of NVIDIA GPU accelerated libraries for your AI projects, this guide will help answer your questions and get you started on the right path. When it comes to AI or, more broadly, machine learning, GPU accelerated libraries are a great option.
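Drop-in array libraries are one of the easiest entry points. As a hedged illustration, the sketch below uses CuPy, a NumPy-compatible library backed by NVIDIA CUDA libraries such as cuBLAS; it assumes a CUDA-capable GPU and a cupy build matching your installed CUDA toolkit.

```python
import numpy as np
import cupy as cp  # NumPy-compatible arrays backed by NVIDIA CUDA libraries

# Build a matrix on the CPU, then copy it into GPU memory.
a_cpu = np.random.rand(4096, 4096).astype(np.float32)
a_gpu = cp.asarray(a_cpu)

# This matmul dispatches to cuBLAS on the GPU instead of running on the CPU.
b_gpu = a_gpu @ a_gpu

# Copy the result back to host memory when you need it as a NumPy array.
b_cpu = cp.asnumpy(b_gpu)
print(b_cpu.shape, b_cpu.dtype)  # (4096, 4096) float32
```

The appeal of this style is that existing NumPy code often needs little more than an import swap to run on the GPU.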


Nvidia's Canvas AI Painting Tool Turns Color Blobs into Realistic Imagery

#artificialintelligence

Nvidia's new Canvas tool, now available as a free beta, is a real-time painting tool built on GauGAN that can be used by anyone with an NVIDIA RTX GPU. It lets creators rough out a landscape by painting blobs of color, then fills it in with convincingly photorealistic content. Each distinct color on the tool represents a different type of feature: water, mountains, grass, ruins, and others. As colors are blobbed onto the canvas, the crude sketch is passed to a generative adversarial network. A GAN essentially pits a generator, which tries to make a realistic image, against a discriminator that judges how real that image looks.
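The adversarial idea is easiest to see in code. Below is a minimal PyTorch sketch of the alternating GAN training loop on toy random vectors; it is not GauGAN's architecture (which conditions the generator on a segmentation map of those color blobs), just the bare two-network game.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator on 1-D "images" (vectors of length 64).
G = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 64))
D = nn.Sequential(nn.Linear(64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(32, 64)  # stand-in for a batch of real images
for step in range(100):
    # 1) Train the discriminator: score real data high, generated data low.
    fake = G(torch.randn(32, 16)).detach()  # detach: don't update G here
    d_loss = loss_fn(D(real_data), torch.ones(32, 1)) + \
             loss_fn(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: make the discriminator call its output real.
    fake = G(torch.randn(32, 16))
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks push against each other, the generator is driven toward outputs the discriminator cannot tell apart from real data, which is what makes Canvas's fills look photorealistic.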


AMD sees another six months of video game chip shortages

ZDNet

The world's hunger for graphics chips for PCs and gaming consoles means there will be short supply in many markets through the first half of this year, according to chip titan Advanced Micro Devices. "We did have some supply constraints as we ended the year," said CEO Lisa Su on a conference call with analysts Tuesday evening following the company's report of stronger-than-expected Q4 results. Shortages of supply "were primarily, I would say, in the PC market, the low end of the PC market and in the gaming markets," said Su. "That being said, I think we're getting great support from our manufacturing partners. The industry does need to increase the overall capacity levels. And so we do see some tightness through the first half of the year."

(Image caption: AMD's Radeon RX 6900 XT graphics card, introduced last quarter.)


How to Download, Install, and Use an Nvidia GPU for TensorFlow on Windows

#artificialintelligence

"Graphics has lately made a great shift towards machine learning, which itself is about understanding data," says Jefferson Han, founder and chief scientist of Perceptive Pixel. CPUs can fetch data quickly, but they cannot process much data at once, because a CPU must make many round trips to main memory to complete even a simple task. A CPU executes jobs sequentially and has relatively few cores, while a GPU comes with hundreds of smaller cores working in parallel, making the GPU a highly parallel architecture and thereby improving performance. TensorFlow's GPU support works only if you have a CUDA-enabled graphics card, and all newer NVIDIA graphics cards from the past three or four years have CUDA enabled.
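Once the driver, CUDA toolkit, and cuDNN are installed, it's worth confirming that TensorFlow actually sees the card before training anything. A minimal check, assuming a working TensorFlow 2.x install on Windows:

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means CPU-only execution.
gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

if gpus:
    # Run a small op pinned to the first GPU and confirm its placement.
    with tf.device("/GPU:0"):
        x = tf.random.uniform((1000, 1000))
        y = tf.matmul(x, x)
    print("Matmul ran on:", y.device)
```

If the list comes back empty despite a CUDA-capable card, the usual culprits are a driver, CUDA toolkit, or cuDNN version that doesn't match what your TensorFlow build expects.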