Nvidia plans for a more robust Omniverse with avatars, synthetic data

ZDNet

As enterprises prepare to bring more of their business and operations to the virtual world, Nvidia is building out Omniverse, its platform for extending workflows into the virtual sphere. The latest updates to the platform, introduced during GTC 2021, include Omniverse Avatar, a tool for creating embodied AIs, as well as Omniverse Replicator, a synthetic data-generation engine. Omniverse Replicator is a simulation framework that produces physically accurate synthetic data to accelerate the training of deep neural networks for AI applications. Nvidia has created Omniverse Replicators for DRIVE Sim, for training AI perception networks for autonomous vehicles, and for Isaac Sim, for training robots. Nvidia rolled out Omniverse in open beta last December -- nearly a year before Facebook committed to the concept of a "metaverse" by renaming itself Meta.


Researchers Take Steps Towards Autonomous AI-Powered Exoskeleton Legs

#artificialintelligence

University of Waterloo researchers are using deep learning and computer vision to develop autonomous exoskeleton legs that help users walk, climb stairs, and avoid obstacles. The ExoNet project, described in an early-access paper in Frontiers in Robotics and AI, fits users with wearable cameras. AI software processes the camera's video stream and is being trained to recognize surrounding features such as stairs and doorways, then determine the best movements to take. "Our control approach wouldn't necessarily require human thought," said Brokoslaw Laschowski, Ph.D. candidate in systems design engineering and lead author on the ExoNet project. "Similar to autonomous cars that drive themselves, we're designing autonomous exoskeletons that walk for themselves."
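
The paper is the authoritative reference for ExoNet itself; purely as a rough illustration of the kind of pipeline described, the sketch below (OpenCV and PyTorch assumed, with placeholder terrain labels and an untrained stand-in model, not the actual ExoNet network) classifies each camera frame into a walking-environment category.

```python
# Hypothetical sketch of a terrain-recognition loop for an exoskeleton camera.
# Assumes OpenCV and PyTorch/torchvision; the label set and the (here untrained)
# model are placeholders, not the actual ExoNet network.
import cv2
import torch
from torchvision import models, transforms

LABELS = ["level_ground", "stairs_up", "stairs_down", "doorway"]  # placeholder classes

model = models.resnet18(num_classes=len(LABELS)).eval()  # would be fine-tuned on terrain images

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)  # stand-in for the wearable camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0))
    terrain = LABELS[int(logits.argmax())]
    # A controller would map the predicted terrain to an exoskeleton gait mode here.
    print(terrain)
cap.release()
```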


StradVision Joins NVIDIA Inception Program as Premier Partner

#artificialintelligence

StradVision has joined NVIDIA Inception, a virtual accelerator program designed to nurture companies that are revolutionizing industries with advancements in AI and data science. Distinguishing itself from other AI companies, StradVision has also been selected as one of the program's Premier Partners, an exclusive group within NVIDIA Inception's global network of over 6,000 startups. StradVision specializes in AI-based vision processing technology for Advanced Driver-Assistance Systems (ADAS) and Autonomous Vehicles (AVs) via its flagship product, SVNet, a lightweight embedded software package that allows vehicles to accurately detect and identify objects on the road, even in harsh weather or poor lighting. Thanks to StradVision's patented Deep Neural Network-enabled technology, SVNet can be optimized for any hardware system.


FPGAs could replace GPUs in many deep learning applications

#artificialintelligence

The renewed interest in artificial intelligence over the past decade has been a boon for the graphics cards industry. Companies like Nvidia and AMD have seen a huge boost to their stock prices as their GPUs have proven to be very efficient for training and running deep learning models. Nvidia, in fact, has even pivoted from a pure GPU and gaming company to a provider of cloud GPU services and a competent AI research lab. But GPUs also have inherent flaws that pose challenges in putting them to use in AI applications, according to Ludovic Larzul, CEO and co-founder of Mipsology, a company that specializes in machine learning software. The solution, Larzul says, is field-programmable gate arrays (FPGAs), his company's area of focus. An FPGA is a type of chip whose circuitry can be reconfigured after manufacturing, which can make it more efficient than general-purpose processors for specific workloads.


Exploring Energy-Accuracy Tradeoffs in AI Hardware

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is playing an increasingly significant role in our everyday lives. This trend is expected to continue, especially with recent pushes to move more AI to the edge. However, one of the biggest challenges associated with AI on edge devices (mobile phones, unmanned vehicles, sensors, etc.) is their associated size, weight, and power constraints. In this work, we consider the scenario where an AI system may need to operate at less-than-maximum accuracy in order to meet application-dependent energy requirements. We propose a simple function that divides the cost of using an AI system into the cost of the decision making process and the cost of decision execution. For simple binary decision problems with convolutional neural networks, it is shown that minimizing the cost corresponds to using fewer than the maximum number of resources (e.g. convolutional neural network layers and filters). Finally, it is shown that the cost associated with energy can be significantly reduced by leveraging high-confidence predictions made in lower-level layers of the network.
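
The paper contains the exact formulation; as a rough, hypothetical sketch of the idea (PyTorch assumed, all names invented), the code below splits total cost into a decision term that grows with the layers actually evaluated and an execution term for the chosen action, and exits at an intermediate layer once a softmax confidence threshold is met.

```python
# Illustrative sketch of an energy-aware early-exit classifier (not the paper's exact model).
# total_cost = decision cost (layers actually evaluated) + execution cost (chosen action).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=2, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
            for c_in, c_out in [(3, 16), (16, 32), (32, 64)]
        ])
        # One lightweight classifier ("exit head") per block.
        self.exits = nn.ModuleList([nn.Linear(c * 8 * 8, num_classes) for c in (16, 32, 64)])
        self.threshold = threshold

    def forward(self, x):
        layers_used = 0
        for block, head in zip(self.blocks, self.exits):
            x = block(x)
            layers_used += 1
            probs = F.softmax(head(x.flatten(1)), dim=1)
            conf, pred = probs.max(dim=1)
            if conf.item() >= self.threshold:  # confident enough: stop early, saving energy
                break
        return pred, layers_used

def total_cost(layers_used, action, per_layer_energy=1.0, action_energy={0: 0.0, 1: 5.0}):
    """Decision cost grows with layers evaluated; execution cost depends on the action taken."""
    return layers_used * per_layer_energy + action_energy[action]

net = EarlyExitNet().eval()
with torch.no_grad():
    pred, used = net(torch.randn(1, 3, 32, 32))
print(total_cost(used, int(pred)))
```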


Nvidia makes a clean sweep of MLPerf predictions benchmark for artificial intelligence

ZDNet

Graphics chip giant Nvidia mopped the floor with its competition in a benchmark set of tests released Wednesday afternoon, demonstrating better performance on a host of artificial intelligence tasks. The benchmark results, announced by MLPerf, an industry consortium that administers the tests, showed Nvidia achieving better speed on a variety of tasks that use neural networks, from categorizing images to recommending which products a person might like. Predictions are the part of AI where a trained neural network produces output on real data, as opposed to the training phase when the neural network system is first being refined. Benchmark results on training tasks were announced by MLPerf back in July. Many of the scores pertain to Nvidia's T4 chip, which has been on the market for some time, but even more impressive results were reported for its A100 chips, unveiled in May.
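
As a small, generic illustration of that training-versus-prediction distinction (PyTorch assumed; this is not MLPerf benchmark code), the snippet below contrasts a training step, which updates the model's weights, with an inference step, which only produces outputs on new data.

```python
# Generic sketch of training vs. inference (prediction); not tied to MLPerf's harness.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))

# Training phase: compute a loss and refine the weights.
model.train()
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()

# Prediction (inference) phase: the trained model just produces outputs on real data.
model.eval()
with torch.no_grad():
    preds = model(torch.randn(3, 4)).argmax(dim=1)
print(preds)
```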


Demystifying Deep Learning at NVIDIA GTC

#artificialintelligence

If you're worried you didn't know all of these, don't worry, neither did I! But I'm here to help you out :D Convolutional neural networks: a special type of neural network used effectively for image recognition and classification. They are highly proficient at identifying objects, faces, and traffic signs, and they also provide vision for self-driving cars and robots. Recurrent neural networks: networks that exhibit temporal dynamic behavior, i.e., they allow previous outputs to be used as inputs via hidden states. They are used in music generation, sentiment classification, and machine translation.
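
As a concrete (generic, not GTC-specific) illustration of those two architectures, the PyTorch sketch below defines a minimal convolutional classifier for images and a minimal recurrent model whose hidden state carries earlier steps forward.

```python
# Minimal, generic examples of the two architectures mentioned above (PyTorch assumed).
import torch
import torch.nn as nn

# Convolutional network: learns spatial features for image classification.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),             # e.g. 10 classes for 32x32 RGB inputs
)
print(cnn(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 10])

# Recurrent network: the hidden state feeds each step's context into the next.
rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)                       # e.g. binary sentiment from the last step
out, h_n = rnn(torch.randn(1, 20, 8))         # a sequence of 20 steps, 8 features each
print(head(out[:, -1]).shape)                 # torch.Size([1, 2])
```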


How Nvidia built Selene, the world's seventh-fastest computer, in three weeks

ZDNet

Five years ago, Nvidia set out to design a supercomputer-class system powerful enough to train and run its own AI models, such as models for autonomous vehicles, but flexible enough to serve just about any deep-learning researcher. After building multiple iterations of its DGX Pods, Nvidia learned valuable lessons about building a system with modular, scalable units. The COVID-19 outbreak brought new challenges for Nvidia as it set out to build Selene, the fourth generation of its DGX SuperPODs. Reduced staffing and building-access restrictions complicated the task, but Nvidia managed to go from bare racks in the data center to a fully operational system in just three and a half weeks. Selene is now a top-10 supercomputer, the fastest industrial system in the US and the fastest commercially available MLPerf machine.


Quickly Embed AI Into Your Projects With Nvidia's Jetson Nano

#artificialintelligence

When opportunity knocks, open the door: No one has taken heed of that adage like Nvidia, which has transformed itself from a company focused on catering to the needs of video gamers to one at the heart of the artificial-intelligence revolution. In 2001, no one predicted that the same processor architecture developed to draw realistic explosions in 3D would be just the thing to power a renaissance in deep learning. But when Nvidia realized that academics were gobbling up its graphics cards, it responded, supporting researchers with the launch of the CUDA parallel computing software framework in 2006. Since then, Nvidia has been a big player in the world of high-end embedded AI applications, where teams of highly trained (and paid) engineers have used its hardware for things like autonomous vehicles. Now the company claims to be making it easy for even hobbyists to use embedded machine learning, with its US $100 Jetson Nano dev kit, which was originally launched in early 2019 and rereleased this March with several upgrades.