
Fast Fine-Tuning of AI Transformers Using RAPIDS Machine Learning


In recent years, transformers have emerged as a powerful deep neural network architecture, advancing the state of the art in many application domains such as natural language processing (NLP) and computer vision. This post covers how to achieve maximum accuracy with the fastest possible training time when fine-tuning transformers. We demonstrate how the cuML support vector machine (SVM) algorithm, from the RAPIDS Machine Learning library, can dramatically accelerate this process: on a GPU, cuML SVM is up to 500x faster than the CPU-based implementation. The approach uses an SVM head instead of the conventional multi-layer perceptron (MLP) head, making it possible to fine-tune with precision and ease.
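The idea above can be sketched in a few lines: freeze the transformer, extract embeddings, and fit an SVM as the classification head. This is a minimal sketch with synthetic data standing in for real transformer embeddings; it uses scikit-learn's `SVC`, whose interface cuML's `cuml.svm.SVC` mirrors, so swapping the import moves the fit to the GPU.

```python
import numpy as np
from sklearn.svm import SVC  # GPU drop-in: from cuml.svm import SVC

# Synthetic stand-in for frozen-transformer embeddings
# (in practice, e.g. the [CLS] outputs of a pretrained model).
rng = np.random.default_rng(0)
n, dim = 200, 768                      # 200 samples, BERT-sized embeddings
embeddings = rng.standard_normal((n, dim)).astype(np.float32)
labels = (embeddings[:, 0] > 0).astype(np.int32)  # toy binary labels

head = SVC(kernel="rbf", C=1.0)        # SVM replaces the usual MLP head
head.fit(embeddings, labels)
print(head.score(embeddings, labels))  # training accuracy
```

Because only the head is trained, there is no backpropagation through the transformer, which is what makes this style of fine-tuning both fast and stable.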

What Is Transfer Learning?


Editor's note: The name of the NVIDIA Transfer Learning Toolkit was changed to NVIDIA TAO Toolkit in August 2021. All references to the name have been updated in this blog. You probably have a career. But hit the books for a graduate degree or take online certificate courses by night, and you could start a new career building on your past experience. Transfer learning is the same idea.

Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world, and it attracts heavy investment from both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations and methods for AI and machine learning are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI; the book provides a general introduction to it, including an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of contents at our own university.

NVIDIA Develops Incredible AI That Can Turn Words Into Images


They say a picture is worth a thousand words, but with NVIDIA's new GauGAN2 artificial intelligence (AI), you can create a picture with just three to four words. The deep-learning model is able to generate a photorealistic image of different scenes using text, sentences, or phrases that you enter into the program. For example, if you type "Highway surrounded by mountains" in the input text field, the AI will generate an image of a highway surrounded by mountains. You can change any keyword, e.g. "desert" instead of "mountains", and a new image will be created based on your text input.

The 10 Coolest AI Chips Of 2021


The demand for AI applications, the ever-growing nature of deep learning models and their increasing complexity mean there is plenty of room for competition when it comes to making computer chips more powerful and efficient for such workloads. GPU juggernaut Nvidia may hold the AI chip crown in multiple respects, but that isn't stopping semiconductor companies both large and small from designing their own AI chip architectures that offer differentiation in terms of features, performance and targeted applications. What follows are the 10 coolest AI chips of 2021, which includes processors from semiconductor giants Intel, AMD and Nvidia, computing juggernaut IBM, cloud service providers Google Cloud and Amazon Web Services and AI chip startups Cerebras Systems, Mythic and Syntiant.

Raspberry Pi Pico machine learning inference tutorial


If you are interested in learning more about machine learning inference on the recently launched Raspberry Pi Pico microcontroller, you may be interested in a new tutorial project. Classed as an intermediate skill level project and taking approximately 60 minutes, Maslov covers the basics of setting up a Seeed Grove Shield for Pi Pico v1.0 and Edge Impulse. Edge Impulse is a platform that enables developers to easily train and deploy deep learning models on embedded devices. Check out the video below to learn more. "This is another article in the know-how series, which focuses solely on a specific feature or technique, and today I'll tell you how to use a neural network trained with Edge Impulse with the new Raspberry Pico 2040. Also make sure to watch the tutorial video with step-by-step instructions."

NVIDIA's AI Creates Realistic Photos Based Only on Text Descriptions


NVIDIA's GauGAN2 artificial intelligence (AI) can now use simple written phrases to generate a fitting photorealistic image. The deep-learning model is able to craft different scenes from just three or four words. GauGAN is NVIDIA's AI program that was used to turn simple doodles into photorealistic masterpieces in 2019, a technology that was eventually turned into the NVIDIA Canvas app earlier this year. Now NVIDIA has advanced the AI even further, so that it only needs a brief description in order to generate a "photo." NVIDIA says that the deep learning model behind GauGAN2 allows anyone to make beautiful scenes, and it is now easier than ever.

GraSSNet: Graph Soft Sensing Neural Networks

In the era of big data, data-driven classification has become an essential method in smart manufacturing to guide production and optimize inspection. The industrial data obtained in practice is usually time-series data collected by soft sensors, which is highly nonlinear, nonstationary, imbalanced, and noisy. Most existing soft-sensing machine learning models focus on capturing either intra-series temporal dependencies or pre-defined inter-series correlations, while ignoring the correlation between labels, even though each instance is associated with multiple labels simultaneously. In this paper, we propose a novel graph-based soft-sensing neural network (GraSSNet) for multivariate time-series classification of noisy and highly imbalanced soft-sensing data. The proposed GraSSNet is able to 1) capture the inter-series and intra-series dependencies jointly in the spectral domain; 2) exploit the label correlations by superimposing a label graph built from statistical co-occurrence information; 3) learn features with an attention mechanism from both the textual and numerical domains; and 4) leverage unlabeled data and mitigate data imbalance by semi-supervised learning. Comparative studies with other commonly used classifiers are carried out on Seagate soft sensing data, and the experimental results validate the competitive performance of our proposed method.
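The label graph built from statistical co-occurrence mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's code; the toy multi-label matrix and the conditional-probability edge weights are illustrative assumptions about one common way such a graph is constructed.

```python
import numpy as np

# Toy multi-label indicator matrix: 5 instances x 3 labels (1 = present).
Y = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

cooc = Y.T @ Y              # cooc[i, j] = number of times labels i and j co-occur
counts = np.diag(cooc)      # total occurrences of each label

# Edge weights as conditional probabilities P(label j | label i):
# row i of P describes how strongly label i implies each other label.
P = cooc / counts[:, None]
print(P)
```

A matrix like `P` can then be used as the adjacency of a label graph that is superimposed on the classifier's output space.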

Training With Mixed Precision :: NVIDIA Deep Learning Performance Documentation


Mixed precision training offers significant computational speedup by performing operations in half-precision format, while storing minimal information in single precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in the Volta and Turing architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. The ability to train deep learning networks with lower precision was introduced in the Pascal architecture and first supported in CUDA 8 in the NVIDIA Deep Learning SDK. Mixed precision is the combined use of different numerical precisions in a computational method. Compared with higher-precision FP32 or FP64, half-precision (FP16) data reduces the memory usage of a neural network, allowing larger networks to be trained and deployed, and FP16 data transfers take less time than FP32 or FP64 transfers.

Soft Sensing Transformer: Hundreds of Sensors are Worth a Single Word

With the rapid development of AI technology in recent years, there have been many studies applying deep learning models in the soft sensing area. However, while the models have become more complex, the data sets remain limited: researchers are fitting million-parameter models with hundreds of data samples, which is insufficient to demonstrate the effectiveness of their models, and so they often fail to perform when implemented in industrial applications. To solve this long-standing problem, we are providing large-scale, high-dimensional time-series manufacturing sensor data from Seagate Technology to the public. We demonstrate the challenges and effectiveness of modeling industrial big data with a Soft Sensing Transformer model on these data sets. The transformer is used because it has outperformed state-of-the-art techniques in natural language processing, and has since also performed well when applied directly to computer vision without the introduction of image-specific inductive biases. We observe the similarity of a sentence structure to the sensor readings, and process the multi-variable sensor readings in a time series in a manner similar to sentences in natural language. The high-dimensional time-series data is formatted into the same shape as embedded sentences and fed into the transformer model. The results show that the transformer model outperforms the benchmark models in the soft sensing field based on auto-encoder and long short-term memory (LSTM) models. To the best of our knowledge, we are the first team in academia or industry to benchmark the performance of the original transformer model with large-scale numerical soft sensing data.
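The sentence analogy above can be made concrete with a small shape sketch. The dimensions and the linear projection here are illustrative assumptions, not the paper's actual configuration: each time step plays the role of a word, and the vector of sensor readings at that step plays the role of the word's embedding.

```python
import numpy as np

# Hypothetical dimensions: 8 wafers (batch), 20 time steps, 300 sensors.
batch, steps, sensors = 8, 20, 300
rng = np.random.default_rng(0)
readings = rng.random((batch, steps, sensors)).astype(np.float32)

# A transformer expects embedded sentences of shape (batch, seq_len, d_model).
# A linear projection maps each per-step sensor vector into a
# d_model-sized "token embedding".
d_model = 128
W = rng.random((sensors, d_model)).astype(np.float32)
tokens = readings @ W   # shape (8, 20, 128), ready for a transformer encoder

print(tokens.shape)
```

Once the data has this shape, a standard transformer encoder can be applied to it unchanged, which is the sense in which "hundreds of sensors are worth a single word."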