MosaicML startup from Intel boss counting the cost of AI

#artificialintelligence

A former head of artificial intelligence products at Intel has started a company to help businesses cut the overhead costs of AI systems. Naveen Rao, CEO and co-founder of MosaicML, previously led Nervana Systems, which was acquired by Intel for $350m. But like many Intel acquisitions, the marriage didn't pan out: Intel killed the Nervana AI chip last year, after which Rao left the company. MosaicML's open-source tools focus on implementing AI systems around a target cost, training time, or speed-to-results. They do so by analyzing an AI problem relative to the neural-net settings and hardware, then recommending settings that reach the target efficiently while reducing electricity costs.
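
As a purely hypothetical illustration of that cost/time trade-off (the Config class, names, and figures below are invented for this sketch and are not MosaicML's API), one could compare candidate training configurations by estimated price versus wall-clock time:

```python
# Hypothetical sketch: choose a training configuration by estimated dollar
# cost or by wall-clock time. Numbers and names are placeholders.
from dataclasses import dataclass

@dataclass
class Config:
    name: str
    gpu_hours: float         # estimated training time on the chosen hardware
    dollars_per_hour: float  # hourly price of that hardware

    @property
    def cost(self) -> float:
        return self.gpu_hours * self.dollars_per_hour

candidates = [
    Config("a100-large-batch", gpu_hours=6.0, dollars_per_hour=3.0),
    Config("v100-baseline", gpu_hours=14.0, dollars_per_hour=1.5),
]

cheapest = min(candidates, key=lambda c: c.cost)
fastest = min(candidates, key=lambda c: c.gpu_hours)
print(f"cheapest: {cheapest.name} (${cheapest.cost:.2f}), fastest: {fastest.name}")
```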


NVIDIA AI Releases StyleGAN3: Alias-Free Generative Adversarial Networks

#artificialintelligence

Generative adversarial networks (GANs) have seen rapid improvements in image quality and resolution in recent years. These techniques are used for various applications, including image editing, domain translation, and video generation, to name just a few. While several ways to control the GAN generation process have been found, relatively little is known about the synthesis process itself. In 2019, NVIDIA launched the second version of StyleGAN, fixing characteristic artifacts and further improving the quality of generated images. StyleGAN, one of the first image-generation methods of its kind to produce highly realistic images, was open-sourced in February 2019.
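
For readers who want to try a pretrained generator, here is a minimal sketch of sampling an image, assuming the pickle interface used in NVIDIA's open-source StyleGAN releases; the filename 'ffhq.pkl' is a placeholder for a downloaded network, and unpickling requires the repo's modules (dnnlib, torch_utils) to be importable:

```python
# Minimal sketch of sampling from a pretrained StyleGAN generator, following
# the pickle interface used by NVIDIA's open-source StyleGAN releases.
import pickle
import torch

with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()   # generator as a torch.nn.Module

z = torch.randn([1, G.z_dim]).cuda()      # random latent code
c = None                                  # class labels (unconditional model)
img = G(z, c)                             # NCHW float32 image in [-1, +1]
```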


Differentiable Hardware

#artificialintelligence

How AI Might Help Revive the Virtuous Cycle of Moore's Law

In the wake of the global chip shortage, TSMC has reportedly raised chip prices and delayed the 3nm process. Whether or not this is accurate or indicative of a long-term trend, such news should alert us to the worsening impact of the decline of Moore's Law and compel a rethinking of AI hardware. Would AI hardware be subject to this decline, or could it help reverse it? Suppose we want to revive the virtuous cycle of Moore's Law, in which software and hardware propelled one another, making a modern smartphone more capable than a warehouse-occupying supercomputer of a decade ago. The popularly accepted post-Moore virtuous cycle, in which bigger data leads to larger models requiring more powerful machines, is not sustainable. We can no longer count on transistor shrinking to build ever-wider parallel processors unless we redefine parallelism. Nor can we rely on Domain-Specific Architecture (DSA) unless it facilitates and adapts to software advancement.


ETH Zurich and NVIDIA's Massively Parallel Deep RL Enables Robots to Learn to Walk in Minutes

#artificialintelligence

A new study on learned legged locomotion uses massive parallelism on a single GPU to get robots up and walking on flat terrain in under four minutes, and on uneven terrain in twenty minutes. Although deep reinforcement learning (DRL) has achieved impressive results in robotics, the amount of data required to train a policy increases dramatically with task complexity. One way to improve the quality and time-to-deployment of DRL policies is to use massive parallelism. In the paper Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning, a research team from ETH Zurich and NVIDIA proposes a training framework that enables fast policy generation for real-world robotic tasks, using massive parallelism on a single workstation GPU. Compared to previous methods, the approach can reduce training time by multiple orders of magnitude.
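
The core idea is that thousands of simulated robots can be stepped as one batched tensor operation on the GPU. The toy sketch below illustrates that batching with invented dummy dynamics and rewards; the actual work relies on NVIDIA's Isaac Gym simulator and a PPO-style learner, not this code:

```python
# Conceptual sketch: stepping thousands of toy environments in parallel as
# batched tensor operations on one GPU. Dynamics and rewards are dummies.
import torch

num_envs, obs_dim, act_dim = 4096, 8, 2
device = "cuda" if torch.cuda.is_available() else "cpu"

state = torch.zeros(num_envs, obs_dim, device=device)
policy = torch.nn.Linear(obs_dim, act_dim).to(device)

for _ in range(100):
    actions = policy(state)                                    # one forward pass for all envs
    state = state + 0.01 * actions.mean(dim=1, keepdim=True)   # dummy dynamics, broadcast to all envs
    rewards = -state.pow(2).sum(dim=1)                         # dummy reward, shape (num_envs,)
```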


NVIDIA Invites Developers To Test Experimental DLSS Models Directly From Company's Supercomputer

#artificialintelligence

NVIDIA recently began inviting developers to test the newest DLSS (Deep Learning Super Sampling) build and submit their experiences and findings to the developer forum on NVIDIA's site. NVIDIA DLSS is "a deep learning neural network that boosts frame rates and generates beautiful, sharp images for your games. It gives you the performance headroom to maximize ray tracing settings and increase output resolution. DLSS is powered by dedicated AI processors on RTX GPUs called Tensor Cores." Through this program, NVIDIA is enabling developers to explore and evaluate experimental DLSS models directly from its supercomputer.


IBM Research Says Analog AI Will Be 100X More Efficient. Yes, 100X.

#artificialintelligence

The IBM AI Hardware Research Center has delivered significant digital AI logic and is now turning its attention to solving AI problems in an entirely new way. The center is located at the TJ Watson Research Center near Yorktown Heights, New York. Gary Fritz, Cambrian-AI Research Analyst, contributed to this article. AI is showing up in nearly every aspect of business. Larger and more complex Deep Neural Nets (DNNs) keep delivering ever-more-remarkable results. The challenge, as always, is power and performance.
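
As a rough, purely conceptual sketch of why analog in-memory computing can be so efficient, a layer's multiply-accumulate can be performed inside the memory array itself, at the cost of limited weight precision and read noise (the figures below are invented for illustration, not IBM's measurements):

```python
# Conceptual model of an analog crossbar matrix-vector product: weights are
# "programmed" as quantized conductances, and the readout carries noise.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 784))        # layer weights programmed into devices
x = rng.standard_normal(784)                      # input activations applied as voltages

levels = 2 ** 4                                   # ~4-bit conductance resolution
w_max = np.abs(weights).max()
programmed = np.round(weights / w_max * levels) / levels * w_max  # quantized conductances

analog_out = programmed @ x                       # multiply-accumulate happens in the array
analog_out += 0.01 * np.abs(analog_out) * rng.standard_normal(analog_out.shape)  # read noise

exact_out = weights @ x
print("relative error:", np.linalg.norm(analog_out - exact_out) / np.linalg.norm(exact_out))
```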


AI Fueling a Technological Revolution in Africa

#artificialintelligence

AI is at play on a global stage, and local developers are stealing the show. Grassroots communities are essential to driving AI innovation, according to Kate Kallot, head of emerging areas at NVIDIA. On its opening day, Kallot gave a keynote speech at the largest AI Expo Africa to date, addressing a virtual crowd of 10,000 people. She highlighted how AI can fuel technological and creative revolutions around the world. Kallot also shared how NVIDIA supports developers in emerging markets to build and scale their AI projects, including through the NVIDIA Developer Program, which has more than 2.5 million members; the NVIDIA Inception Program, which offers go-to-market support, expertise and technology for AI, data science and HPC startups; and the NVIDIA Deep Learning Institute, which offers educational resources for anyone who wants to learn about all things AI. "I hope to inspire you on ways to fuel your own applications and help advance the African AI revolution," Kallot said.


A Column Streaming-Based Convolution Engine and Mapping Algorithm for CNN-based Edge AI accelerators

arXiv.org Artificial Intelligence

Edge AI accelerators have been emerging as a solution for applications close to the customer, such as unmanned aerial vehicles (UAVs), image recognition sensors, wearable devices, robotics, and remote sensing satellites. These applications must not only meet performance targets but also satisfy strict area and power constraints due to their portability and limited power sources. This paper therefore proposes a column streaming-based convolution engine that includes column-wise sets of processing elements, designed for flexibility in supporting different CNN algorithms in edge AI accelerators. Compared to a commercial CNN accelerator, the key results reveal that the column streaming-based convolution engine requires similar execution cycles for processing a 227 x 227 feature map while avoiding zero-padding penalties.
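
The sketch below is an illustrative reading of the column-streaming idea, not the paper's architecture: input columns arrive one at a time, a small buffer holds the most recent K columns, and a valid (unpadded) output column is emitted as soon as enough columns are available:

```python
# Illustrative column-streaming convolution (cross-correlation, as in CNNs):
# stream input columns, buffer the last K, emit one output column per step.
import numpy as np

def column_streaming_conv(fmap: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    H, W = fmap.shape
    K = kernel.shape[0]                        # assume a square K x K kernel
    out_cols = []
    window = []                                # buffer of the most recent K input columns
    for col in range(W):                       # columns arrive one by one
        window.append(fmap[:, col])
        if len(window) < K:
            continue                           # not enough columns buffered yet
        block = np.stack(window[-K:], axis=1)  # H x K slice of the input
        out = np.array([
            np.sum(block[row:row + K, :] * kernel)   # one MAC window per output row
            for row in range(H - K + 1)
        ])
        out_cols.append(out)
        window.pop(0)                          # slide the column window forward
    return np.stack(out_cols, axis=1)          # (H-K+1) x (W-K+1), no zero padding

fmap = np.random.rand(227, 227)
kernel = np.random.rand(3, 3)
result = column_streaming_conv(fmap, kernel)
```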


NVIDIA Research: Tensors Are the Future of Deep Learning

#artificialintelligence

This post discusses tensor methods, how they are used at NVIDIA, and how they are central to the next generation of AI algorithms. Tensors, which generalize matrices to more than two dimensions, are everywhere in modern machine learning. From deep neural network features to videos or fMRI data, the structure in these higher-order tensors is often crucial. Deep neural networks typically map between higher-order tensors. In fact, it is the ability of deep convolutional neural networks to preserve and leverage local structure, along with large datasets and efficient hardware, that made the current levels of performance possible. Tensor methods enable you to preserve and leverage that structure further, for individual layers or whole networks.
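
One concrete way to exploit that structure is low-rank tensor decomposition. The example below uses the open-source TensorLy library (chosen here for illustration; the summary above does not name a specific toolkit) to compress a random third-order tensor with a Tucker decomposition:

```python
# Tucker decomposition with TensorLy: factor a 3rd-order tensor into a small
# core and one factor matrix per mode, then measure reconstruction error.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

tensor = tl.tensor(np.random.rand(64, 64, 3))
core, factors = tucker(tensor, rank=[8, 8, 3])        # low-rank core + per-mode factors
reconstruction = tl.tucker_to_tensor((core, factors))
error = tl.norm(reconstruction - tensor) / tl.norm(tensor)
print("relative reconstruction error:", float(error))
```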


La veille de la cybersécurité

#artificialintelligence

Grid, which runs on AWS, supports Lightning as well as classic machine learning frameworks such as TensorFlow, Keras, PyTorch, and scikit-learn. It also helps users scale the training of models from the NGC Catalog, a curated set of GPU-optimised containers for deep learning, visualisation, and high-performance computing (HPC). The PyTorch Lightning software and developer environment are available in the NGC Catalog. Also, check out GitHub to get started with Grid, NGC, and PyTorch Lightning.
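
For orientation, a minimal PyTorch Lightning training loop looks like the sketch below; the dataset and hyperparameters are placeholders, and the same LightningModule could then be scaled out on Grid or run from an NGC container:

```python
# Minimal PyTorch Lightning example: a LightningModule plus a Trainer.
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

data = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))  # placeholder dataset
trainer = pl.Trainer(max_epochs=1)
trainer.fit(LitRegressor(), DataLoader(data, batch_size=32))
```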