Nvidia launches a new GPU architecture and the Grace CPU Superchip – TechCrunch

#artificialintelligence

At its annual GTC conference for AI developers, Nvidia today announced its next-gen Hopper GPU architecture and the Hopper H100 GPU, as well as a new data center chip that combines the GPU with a high-performance CPU, which Nvidia calls the "Grace CPU Superchip" (not to be confused with the Grace Hopper Superchip). With Hopper, Nvidia is launching a number of new and updated technologies, but for AI developers, the most important one may just be the architecture's focus on transformer models, which have become the machine learning technique de rigueur for many use cases and which power models like GPT-3 and BERT. The new Transformer Engine in the H100 chip promises to speed up model training by up to six times, and because this new architecture also features Nvidia's new NVLink Switch system for connecting multiple nodes, large server clusters powered by these chips will be able to scale up to support massive networks with less overhead. "The largest AI models can require months to train on today's computing platforms," Nvidia's Dave Salvator writes in today's announcement. AI, high performance computing and data analytics are growing in complexity, with some models, like large language ones, reaching trillions of parameters.
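
The transformer models the excerpt refers to are built around self-attention, which reduces almost entirely to large matrix multiplications, exactly the workload a chip like the H100 is built to accelerate. As a rough, generic illustration (a single attention head sketched in NumPy, not Nvidia's implementation), it looks something like this:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a token sequence X.

    X:  (seq_len, d_model) input embeddings
    Wq, Wk, Wv: (d_model, d_head) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv               # project inputs
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))                     # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)          # (4, 8)
```

Every token attends to every other token, so the work grows quadratically with sequence length, which is one reason models with trillions of parameters can take months to train without hardware dedicated to this pattern.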


GTC 2022: Nvidia flexes its GPU and platform muscles

#artificialintelligence

Nvidia packed about three years' worth of news into its GPU Technology Conference today. Flamboyant CEO Jensen Huang's 1-hour, 39-minute keynote covered a lot of ground, but the unifying themes across the majority of the two dozen announcements were GPUs and Nvidia's platform approach to everything it builds. Most people know Nvidia as the world's largest manufacturer of graphics processing units, or GPUs. The GPU is a chip that was first used to accelerate graphics in gaming systems.


Why scrapping Nvidia Arm deal is ultimately bad for the industry

ZDNet

The largest proposed semiconductor acquisition in IT history – Nvidia merging with Arm – was called off today due to significant regulatory challenges, with antitrust issues being the main hurdle. The $40 billion deal was initially announced in September 2020, and there has been wide speculation that this would eventually be the outcome based on several factors that I believed were either not true or overblown. Before I get into that, it's important to understand why this deal was so important. Nvidia's core product is the graphics processing unit, or GPU, which was initially used to improve graphics capabilities on computers for uses such as gaming. It just so happens that the architecture of a GPU makes it ideal for other tasks that require accelerated computing, such as real-time graphics rendering, virtual reality, and artificial intelligence.


Nvidia's AI-powered scaling makes old games look better without a huge performance hit

#artificialintelligence

Nvidia's latest game-ready driver includes a tool that could let you improve the image quality of games that your graphics card can easily run, alongside optimizations for the new God of War PC port. The tech is called Deep Learning Dynamic Super Resolution, or DLDSR, and Nvidia says you can use it to make "most games" look sharper by running them at a higher resolution than your monitor natively supports. DLDSR builds on Nvidia's Dynamic Super Resolution tech, which has been around for years. Essentially, regular old DSR renders a game at a higher resolution than your monitor can handle and then downscales it to your monitor's native resolution. This leads to an image with better sharpness but usually comes with a dip in performance (you are asking your GPU to do more work, after all). So, for instance, if you had a graphics card capable of running a game at 4K but only had a 1440p monitor, you could use DSR to get a boost in clarity.
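
The downscaling half of that pipeline is easy to sketch. The toy version below uses a plain box filter at an integer factor (4K down to 1080p); Nvidia's actual resampling filters are more sophisticated, and DLDSR layers an AI-trained filter on top, so treat this purely as an illustration of the idea:

```python
import numpy as np

def box_downscale(frame, factor):
    """Average each `factor` x `factor` block of pixels into one output pixel.

    frame: (H, W, 3) array rendered above native resolution;
    H and W must be divisible by `factor`.
    """
    h, w, c = frame.shape
    blocks = frame.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

render = np.random.rand(2160, 3840, 3).astype(np.float32)  # stand-in 4K frame
native = box_downscale(render, 2)                          # down to 1080p
print(native.shape)                                        # (1080, 1920, 3)
```

Averaging several rendered pixels into each displayed pixel is what buys the extra sharpness; the cost is that the GPU had to shade four times as many pixels in this example, which is exactly the performance dip the article describes.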


CES 2022: AMD, Intel, and Nvidia make CPUs and GPUs buddy up

ZDNet

Late last year, I wrote about Apple's first M1 series-powered MacBook Pros and how the company spared no opportunity to bring out the big benchmark guns against its previous efforts as well as rivals. At CES, the empires (at least those that rule PC chips) struck back, with AMD, Intel, and Nvidia all announcing new versions of flagships that address the need to deliver more performance more efficiently. All three have improved performance and efficiency by tapping into the versatility of the Windows ecosystem to find new ways for CPUs and GPUs to work together. With AMD having the longest history of offering both CPUs and discrete GPUs, it's been no surprise to see the company embrace more intelligent power shifting between the two. The company upped its SmartShift technology for routing computational load between CPU and GPU to SmartShift Max.


Nvidia data center sales grew 55% on demand for artificial intelligence chips

#artificialintelligence

Nvidia CFO Colette Kress said customers are using the chips for tasks such as understanding human speech and crunching data to offer customer recommendations. Gaming, Nvidia's biggest market, reported $3.2 billion in sales, up 42% from $2.27 billion in the same quarter last year. The company attributed the growth primarily to increased sales of its GeForce consumer graphics processors, though it said supply remained limited. Nvidia's gaming graphics cards now have software that prevents them from being used for cryptocurrency mining, the company said. Nvidia introduced dedicated graphics cards for crypto mining earlier this year to help meet some of the demand.


Nvidia DLSS Is Building a Walled Garden, and It's Working

#artificialintelligence

I just reviewed AMD's new Radeon RX 6600, which is a budget GPU that squarely targets 1080p gamers. It's a decent option, especially in a time when GPU prices are through the roof, but it exposed a trend that I've seen brewing over the past few graphics card launches. Nvidia's Deep Learning Super Sampling (DLSS) tech is too good to ignore, no matter how powerful the competition is from AMD. In a time when resolutions and refresh rates continue to climb, and demanding features like ray tracing are becoming the norm, upscaling is essential to run the latest games in their full glory. AMD offers an alternative to DLSS in the form of FidelityFX Super Resolution (FSR).


AI Fueling a Technological Revolution in Africa

#artificialintelligence

AI is at play on a global stage, and local developers are stealing the show. Grassroots communities are essential to driving AI innovation, according to Kate Kallot, head of emerging areas at NVIDIA. Kallot gave a keynote speech on the opening day of the largest AI Expo Africa to date, addressing a virtual crowd of 10,000 people. She highlighted how AI can fuel technological and creative revolutions around the world. Kallot also shared how NVIDIA supports developers in emerging markets to build and scale their AI projects, including through the NVIDIA Developer Program, which has more than 2.5 million members; the NVIDIA Inception Program, which offers go-to-market support, expertise and technology for AI, data science and HPC startups; and the NVIDIA Deep Learning Institute, which offers educational resources for anyone who wants to learn about all things AI. "I hope to inspire you on ways to fuel your own applications and help advance the African AI revolution," Kallot said.


Nvidia's Canvas AI Painting Tool Turns Color Blobs into Realistic Imagery

#artificialintelligence

Nvidia's new Canvas tool, now available as a free beta, is a real-time painting tool built on GauGAN that can be used by anyone with an NVIDIA RTX GPU. It allows creators to rough out a landscape by painting blobs of color, which it then fills in with convincingly photorealistic content. Each distinct color in the tool represents a different type of feature: water, mountains, grass, ruins, and others. As colors are blobbed onto the canvas, the crude sketch is passed to a generative adversarial network. A GAN essentially pits two networks against each other: a generator that tries to make a realistic image, and a discriminator that judges how convincing the result is.
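
Canvas's actual pipeline (GauGAN's generator conditioned on the painted segmentation map) is far more elaborate, but the adversarial training loop at the heart of any GAN is compact. Here is a minimal, hypothetical PyTorch sketch that trains a generator to mimic a 1-D Gaussian rather than landscapes; the network sizes and data are illustrative only:

```python
import torch
import torch.nn as nn

# Generator maps random noise to fake samples; discriminator scores realism.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))                   # generator's attempt

    # Train D: push real samples toward label 1, generated samples toward 0.
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train G: make D label the fakes as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(256, 8)).mean().item())        # drifts toward 3.0
```

Swap the toy networks for convolutional ones conditioned on a segmentation map, and you have the rough shape of what GauGAN does with those painted blobs.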


Does using Nvidia's DLSS require a better CPU? Ask an expert

PCWorld

Q: If you want to use DLSS to make a game run faster, does it need a better CPU since the GPU is utilized less? To get everyone on the same page, let's first recap what DLSS is. Deep Learning Super Sampling, or DLSS, is Nvidia's proprietary technology that uses machine learning and dedicated hardware on the company's RTX cards to internally render games at a lower resolution, then upscale the result and output it at the desired resolution. The first version of this tech didn't look great, but version 2.0 brought massive improvements. Now, not only does the final result look virtually the same as native rendering in most circumstances, but you also get higher performance--meaning you can either enjoy higher framerates at higher resolutions, or switch on a resource-intensive setting like ray tracing without suffering for it.
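
As for the CPU question itself: DLSS cuts the GPU's per-frame cost but leaves the CPU's per-frame work (game logic, physics, draw-call submission) roughly unchanged, so the higher the frame rate climbs, the closer you get to whatever ceiling the CPU imposes. A back-of-the-envelope model, with entirely made-up numbers:

```python
def fps(cpu_ms, gpu_ms):
    """Frame rate is capped by the slower of the two per-frame costs."""
    return 1000 / max(cpu_ms, gpu_ms)

cpu_ms = 8.0           # hypothetical CPU time per frame (~125 fps ceiling)
gpu_native_ms = 16.7   # GPU time at native resolution (~60 fps)
gpu_dlss_ms = 9.0      # GPU time with DLSS rendering internally at lower res

print(fps(cpu_ms, gpu_native_ms))  # ~60 fps: firmly GPU-bound
print(fps(cpu_ms, gpu_dlss_ms))    # ~111 fps: near the CPU's ~125 fps ceiling
```

So DLSS doesn't demand a better CPU; it just means a slower CPU can become the new bottleneck once the GPU stops being one.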