catanzaro
Why supercomputers are the unsung heroes of PC gaming
It's funny how far removed reality can be from what we imagined. A classic example: I pictured a horde of scientists at Nvidia HQ hunched over their PCs, all working to train the next generation of Nvidia DLSS algorithms -- between bouts of Call of Duty with colleagues, of course. But as it turns out, that's only part of the story… Yes, there are scientists at Nvidia working on these projects, but a large portion of the work of training and developing new DLSS technology for us PC gamers to enjoy is done by an AI supercomputer, and it has been doing that non-stop for going on six years now. That nugget of information was delivered by Bryan Catanzaro, Nvidia's VP of applied deep learning research, at CES 2025 in Las Vegas. Catanzaro dropped that gem on stage as a casual throwaway comment while discussing details of DLSS 4, but the remark has since been the catalyst for a ton of talk on the topic.
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.49)
- North America > United States > Nevada > Clark County > Las Vegas (0.25)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Information Technology (1.00)
How Jensen Huang's Nvidia Is Powering the A.I. Revolution
The revelation that ChatGPT, the astonishing artificial-intelligence chatbot, had been trained on an Nvidia supercomputer spurred one of the largest single-day gains in stock-market history. When the Nasdaq opened on May 25, 2023, Nvidia's value increased by about two hundred billion dollars. A few months earlier, Jensen Huang, Nvidia's C.E.O., had informed investors that Nvidia had sold similar supercomputers to fifty of America's hundred largest companies. By the close of trading, Nvidia was the sixth most valuable corporation on earth, worth more than Walmart and ExxonMobil combined. Huang's business position can be compared to that of Samuel Brannan, the celebrated vendor of prospecting supplies in San Francisco in the late eighteen-forties.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- North America > United States > Oregon (0.05)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.05)
- (5 more...)
- Information Technology > Hardware (1.00)
- Education (1.00)
3D Modeling Draws on AI
Graphics rendering has always revolved around a basic premise: faster performance equals a better experience. Of course, graphics processing units (GPUs) that render the complex three-dimensional (3D) images used in video games, augmented reality, and virtual reality can push visual performance only so far before reaching a hardware ceiling. All this has led researchers down the path of artificial intelligence -- including the use of neural nets -- to unlock speed and quality improvements in 3D graphics. In 2022, for example, Nvidia introduced DLSS 3 (Deep Learning Super Sampling), a neural graphics engine that boosts rendering speed by as much as 530%. The technology uses machine learning to predict pixels on the fly rather than having the GPU render every one directly.
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.09)
- Oceania > Australia (0.05)
- North America > United States > Oregon > Clackamas County > West Linn (0.05)
- Asia > Japan (0.05)
- Information Technology > Hardware (0.39)
- Leisure & Entertainment > Games (0.35)
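For context on the passage above: before learned super sampling, upscaling a low-resolution frame meant classical interpolation. The sketch below is a minimal bilinear upscaler in plain Python -- purely illustrative, not NVIDIA's DLSS algorithm, which replaces this fixed formula with a trained neural network that predicts the missing pixels.

```python
def bilinear_upscale(frame, scale):
    """Bilinear upscaling of a 2D grid of intensities: the classical,
    non-learned baseline that neural super sampling (e.g. DLSS) is
    trained to outperform in image quality."""
    h, w = len(frame), len(frame[0])
    out_h, out_w = h * scale, w * scale
    out = [[0.0] * out_w for _ in range(out_h)]
    for oy in range(out_h):
        # Map the output row back to a fractional source coordinate.
        sy = oy * (h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        wy = sy - y0
        for ox in range(out_w):
            sx = ox * (w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            wx = sx - x0
            # Blend the four nearest source pixels by distance.
            top = frame[y0][x0] * (1 - wx) + frame[y0][x1] * wx
            bot = frame[y1][x0] * (1 - wx) + frame[y1][x1] * wx
            out[oy][ox] = top * (1 - wy) + bot * wy
    return out
```

Interpolation like this blurs fine detail, which is exactly the gap a learned model can close: it has seen enough rendered frames to predict plausible high-frequency detail instead of averaging it away.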
When is enough data enough for AI and decision making?
The problem and promise of artificial intelligence (AI) is people. This has always been true, whatever our hopes (and fears) of robotic overlords taking over. In AI, and in data science more generally, the trick is to blend the best of humans and machines. For some time, the AI industry's cheerleaders have tended to stress the machine side of the equation. But as Spring Health data scientist Elena Dyachkova intimates, data (and the machines behind it) are only as useful as the people interpreting them are smart.
Amplify Partners' Sarah Catanzaro on the evolution of MLOps - RTInsights
Note: This interview was edited and condensed for clarity. As part of our media partnership with Tecton's apply(conf), RTInsights recently had the opportunity to speak with Sarah Catanzaro, General Partner at the venture firm Amplify Partners. The firm has invested in data startups including OctoML, Einblick, and Hex. Prior to venture capital, she was the Head of Data at Mattermark. She started her career in counterterrorism.
Nvidia makes massive language model available to enterprises
Let the OSS Enterprise newsletter guide your open source journey! At its fall 2021 GPU Technology Conference (GTC) today, Nvidia announced that it's making Megatron 530B, one of the world's largest language models, available to enterprises for training to serve new domains and languages. First detailed in early October, Megatron 530B -- also known as Megatron-Turing Natural Language Generation (MT-NLP) -- contains 530 billion parameters and achieves high accuracy in a broad set of natural language tasks, including reading comprehension, commonsense reasoning, and natural language inference. "Today, we provide recipes for customers to build, train, and customize large language models, including Megatron 530B. This includes scripts, code, and 530B untrained model. Customers can start from smaller models and scale up to larger models as they see fit," Nvidia VP of AI software product management Kari Briski told VentureBeat via email.
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.05)
- North America > United States > Massachusetts > Hampshire County > Amherst (0.05)
- Asia > China > Beijing > Beijing (0.05)
NVIDIA and the battle for the future of AI chips
THERE'S AN APOCRYPHAL story about how NVIDIA pivoted from games and graphics hardware to dominate AI chips – and it involves cats. Back in 2010, Bill Dally, now chief scientist at NVIDIA, was having breakfast with a former colleague from Stanford University, the computer scientist Andrew Ng, who was working on a project with Google. "He was trying to find cats on the internet – he didn't put it that way, but that's what he was doing," Dally says. Ng was working at the Google X lab on a project to build a neural network that could learn on its own. The neural network was shown ten million YouTube videos and learned how to pick out human faces, bodies and cats – but to do so accurately, the system required thousands of CPUs (central processing units), the workhorse processors that power computers. "I said, 'I bet we could do it with just a few GPUs,'" Dally says. GPUs (graphics processing units) are specialised for more intense workloads such as 3D rendering – and that makes them better than CPUs at powering AI. Dally turned to Bryan Catanzaro, who now leads deep learning research at NVIDIA, to make it happen.
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.26)
- Europe > United Kingdom (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- (4 more...)
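The claim in the article above -- that GPUs beat CPUs at powering neural networks -- comes down to data parallelism: a network layer is mostly independent multiply-accumulate operations. The sketch below (illustrative only, not NVIDIA's code) shows the forward pass of one fully connected layer, where every output unit can be computed without waiting on any other.

```python
def dense_layer(x, weights, bias):
    """Forward pass of one fully connected neural-network layer.
    Each output unit is an independent dot product plus a bias, so
    the whole layer maps naturally onto a GPU's thousands of cores,
    whereas a CPU must grind through the units a few at a time."""
    return [
        sum(xi * wi for xi, wi in zip(x, w_row)) + b
        for w_row, b in zip(weights, bias)  # each iteration is independent
    ]
```

A real network stacks many such layers with nonlinearities in between, but the parallel structure is the same at every layer -- which is why a few GPUs could replace the thousands of CPUs in the Google X experiment.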
The Billion Dollar AI Problem That Just Keeps Scaling
There is a new challenge workload on the horizon, one in which few can afford to compete. But for those who can, it will spark a rethink of what is possible from even the most powerful traditional supercomputers. It might sound odd that it can be collected under the banner of language modeling, since that evokes speech and text analysis and generation. But emerging workloads and research show how far this is from traditional natural language processing. Over the next several years, language models will likely become far more general purpose, encompassing an unimaginable range of problem types. Having a world described through language and rendered as an image or video, or asking text-based questions about the world and getting answers based on a system's understanding of our nuanced reality, sounds like science fiction.
GauGAN Turns Doodles into Stunning, Realistic Landscapes NVIDIA Blog
A novice painter might set brush to canvas aiming to create a stunning sunset landscape -- craggy, snow-covered peaks reflected in a glassy lake -- only to end up with something that looks more like a multi-colored inkblot. But a deep learning model developed by NVIDIA Research can do just the opposite: it turns rough doodles into photorealistic masterpieces with breathtaking ease. The tool leverages generative adversarial networks, or GANs, to convert segmentation maps into lifelike images. The interactive app built on the model has been christened GauGAN, a lighthearted nod to the post-Impressionist painter Paul Gauguin. GauGAN could offer a powerful tool for creating virtual worlds to everyone from architects and urban planners to landscape designers and game developers.