NVIDIA Broadcast app is a simple toolset for streamers


NVIDIA is making it easier than ever to look professional in your live stream. The company's new Broadcast app is a streamlined, AI-driven toolset that offers noise removal, virtual background effects and auto-framing. The app will be available in September, running on any RTX GPU. With auto-framing, the app uses AI to automatically follow your face as it bounces around your set. Background effects can be images or video, and the noise removal feature works well enough to extinguish the sound of a hairdryer in the same room.

Nvidia attempts to ease the path to deep learning

Nvidia hopes to bring artificial intelligence to a wider range of applications with an update to its Digits software for designing neural networks. Digits version 2, released Tuesday, comes with a graphical user interface, potentially making it accessible to programmers beyond the typical user base of academics and developers who specialize in AI, said Ian Buck, Nvidia vice president of accelerated computing. The previous version could be controlled only through the command line, which required knowledge of specific text commands and forced the user to jump to another window to view the results. Nvidia's Digits software provides an easy way to train deep learning artificial intelligence models to do tasks such as recognizing images of numbers. Digits has also been enhanced to support designs that run on more than one processor, allowing up to four processors to work together simultaneously to build a learning model.

Nvidia claims its deep learning platform is 10 times faster than 6 months ago


Nvidia launched hardware and software improvements to its deep learning computing platform that deliver a 10 times performance boost on deep learning workloads compared with the previous generation six months ago. In the past five years, programmers have made huge advances in AI, first by training deep learning neural networks based on existing data. This allows a neural network to recognize an image of a cat, for instance. The second step is inferencing, or applying the learning capability to new data that has never been seen before, like spotting a cat in a picture that the neural network has never been shown. At the GPU Technology Conference (GTC) event in San Jose, California, Nvidia CEO Jensen Huang didn't announce a new graphics processing unit (GPU).
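The training-then-inference distinction described above can be sketched with a toy example. The following is purely illustrative (a hypothetical nearest-centroid classifier on 2-D points, not NVIDIA's software or any real deep learning framework): a model's parameters are first fit to existing labeled data, and the fitted model is then applied to a data point it has never been shown.

```python
# Illustrative sketch of training vs. inference -- a toy nearest-centroid
# classifier, not NVIDIA's platform. All names here are hypothetical.

def train(samples):
    """Training: fit model parameters (per-class centroids) to labeled data."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def infer(model, point):
    """Inference: apply trained parameters to data never seen in training."""
    def dist2(centroid):
        return (point[0] - centroid[0]) ** 2 + (point[1] - centroid[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Training phase: learn from existing labeled data.
model = train([((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
               ((5.0, 5.0), "not_cat"), ((4.8, 5.2), "not_cat")])

# Inference phase: classify a point the model has never been shown.
print(infer(model, (1.1, 0.9)))  # -> cat
```

Real deep learning workloads differ in scale, not in shape: the expensive training step produces parameters, and inference reuses them on new inputs, which is why the two steps can be accelerated separately.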

AWS beats Google and Microsoft to launching instances with Nvidia Volta GPUs


Amazon Web Services is the first cloud to launch new compute instances that allow developers to build applications that tap into Nvidia's new generation of Volta GPUs, which are designed to provide high-performance acceleration for applications like AI computation. Companies all over are turning to machine learning to help propel their businesses, but building new models often requires a great deal of computation. The Volta is supposed to be a good deal faster at that than past generations of Nvidia's silicon, and making it available through Amazon's cloud means that companies will be able to get started using them right away. Customers will be able to run instances with up to 8 V100 GPUs, which will be made available initially from AWS's Northern Virginia, Oregon, Ireland, and Tokyo datacenters. Nvidia launched a new GPU Cloud offering alongside AWS, which is designed to provide companies with the most optimized environment for running deep learning applications on top of the company's hardware in a public cloud.

Chipmaking Giant Nvidia 'Arms' Itself For An AI-Driven Future


The ballroom of the Leela Palace Hotel in Bengaluru took on an unusual avatar one day in November 2005: it was transformed, albeit for just a few hours, into the boardroom of the iconic Cambridge (UK)-based chip design company ARM (originally Acorn RISC Machines). For the first time in its history, the company was holding its annual meeting of the board of directors outside Britain. "I want our directors to get a feel for how much of the innovation that goes into ARM cores flows from India," Sir Robin Saxby, ARM's co-founder and chairman, told me. At that time, ARM had an ecosystem of over 5,000 India-based engineers, with dedicated research and development (R&D) centres based in the premises of four Indian partners: Wipro, Mindtree, HCL and Sasken. Coincidentally, nine months earlier that same year, another international tech player -- the US Silicon Valley-based Nvidia -- set up its own R&D centre in Bengaluru with an initial team of 50 Indian engineers.