Artificial Intelligence at Intel - Three Current Applications

#artificialintelligence

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders. Intel was founded in 1968 by Robert Noyce and Gordon Moore, who had previously been among the founders of Fairchild Semiconductor. Today, Intel employs over 121,000 people worldwide. In its 2021 annual report, the company reported revenues of $79 billion.


ALCF Developer Session June 30: Profiling Deep Learning Applications with NVIDIA Nsight

#artificialintelligence

On Thursday, June 30 from 1-2 pm Central Time, the Argonne Leadership Computing Facility will hold a developer session on performance analysis …


Neural Network Generates Global Tree Height Map, Reveals Carbon Stock Potential

#artificialintelligence

A new study from researchers at ETH Zurich's EcoVision Lab is the first to produce an interactive Global Canopy Height map. Using a newly developed deep learning algorithm that processes publicly available satellite images, the study could help scientists identify areas of ecosystem degradation and deforestation. The work could also guide sustainable forest management by identifying prime areas for carbon storage, a cornerstone in mitigating climate change. "Global high-resolution data on vegetation characteristics are needed to sustainably manage terrestrial ecosystems, mitigate climate change, and prevent biodiversity loss. With this project, we aim to fill the missing data gaps by merging data from two space missions with the help of deep learning," said Konrad Schindler, a Professor in the Department of Civil, Environmental, and Geomatic Engineering at ETH Zurich.
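The summary doesn't spell out the model, but the general recipe it describes (regressing per-pixel canopy height from optical satellite imagery, supervised by sparse spaceborne reference heights) can be sketched in a few lines. The PyTorch sketch below is illustrative only: the network, band count, and footprint density are assumptions, not the EcoVision Lab's actual architecture.

```python
import torch
import torch.nn as nn

class CanopyHeightNet(nn.Module):
    """Minimal fully convolutional regressor: multispectral bands in,
    one canopy-height channel out (metres per pixel)."""
    def __init__(self, in_bands: int = 12):  # band count is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, x):
        return self.net(x)

model = CanopyHeightNet()
patch = torch.randn(4, 12, 128, 128)      # stand-in satellite patches
pred = model(patch)                       # (4, 1, 128, 128) height map

# Reference heights are sparse: supervise only where a footprint exists.
target = torch.zeros(4, 1, 128, 128)
mask = torch.rand(4, 1, 128, 128) < 0.01  # ~1% of pixels have a label
loss = ((pred - target)[mask] ** 2).mean()
loss.backward()
```

Masking the loss to reference footprints is the essential trick: the optical imagery is wall-to-wall, while reliable height labels exist only at scattered sample locations.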


What Is the Neural Network Libraries Container Available in NVIDIA GPU Cloud?

#artificialintelligence

With applications of artificial intelligence and deep learning (DL) on the rise, organisations seek easier and faster solutions to the problems AI and deep learning present. The challenge has always been how to imitate the human brain and deploy its logic artificially. The result: neural networks, modelled on the brain's wiring. Neural networks can be described as a set of algorithms, loosely modelled on the human brain, that are designed to recognise patterns.
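The container referenced here packages Sony's Neural Network Libraries (nnabla). As a rough illustration of such a pattern-recognising network, here is a minimal nnabla sketch: a two-layer classifier trained for one step on random stand-in data. It assumes only that the nnabla Python package is available (for example, inside the NGC container).

```python
import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

# A small two-layer classifier: 784 inputs -> 128 hidden -> 10 classes.
x = nn.Variable((32, 784))
t = nn.Variable((32, 1))
with nn.parameter_scope("fc1"):
    h = F.relu(PF.affine(x, 128))
with nn.parameter_scope("fc2"):
    y = PF.affine(h, 10)
loss = F.mean(F.softmax_cross_entropy(y, t))

solver = S.Adam()
solver.set_parameters(nn.get_parameters())

# One training step on random stand-in data.
x.d = np.random.randn(32, 784)
t.d = np.random.randint(0, 10, size=(32, 1))
loss.forward()
solver.zero_grad()
loss.backward()
solver.update()
```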


Low-Code AI Model Development with the NVIDIA TAO Toolkit

#artificialintelligence

Chintan Shah is a senior product manager at NVIDIA, focusing on AI products for intelligent video analytics. Chintan manages an end-to-end toolkit for efficient deep learning training and real-time inference. Previously, he developed hardware IPs for NVIDIA GPUs. Chintan holds a master's degree in electrical engineering from North Carolina State University.


A Night to Behold: Researchers Use Deep Learning to Bring Color to Night Vision

#artificialintelligence

A team of scientists has used GPU-accelerated deep learning to show how color can be brought to night-vision systems. In a paper published this week in the journal PLOS One, a team of researchers at the University of California, Irvine, led by Professor Pierre Baldi and Dr. Andrew Browne, describes how they reconstructed color images of faces from photos taken with an infrared camera. The study is a step toward predicting and reconstructing what humans would see using cameras that collect light via imperceptible near-infrared illumination. The study's authors explain that humans see light in the so-called "visible spectrum," with wavelengths between 400 and 700 nanometers. Typical night-vision systems rely on cameras that collect infrared light outside this spectrum, which we can't see.
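The summary doesn't describe the paper's network, so the sketch below only illustrates the general idea under stated assumptions: a small PyTorch encoder-decoder mapping a few near-infrared channels to an RGB image, trained on paired NIR/visible photos. The channel count, architecture, and L1 loss are all assumptions, not the UC Irvine team's method.

```python
import torch
import torch.nn as nn

class NIR2RGB(nn.Module):
    """Toy encoder-decoder: NIR channels in, RGB out in [0, 1]."""
    def __init__(self, nir_channels: int = 3):  # channel count assumed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(nir_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, nir):
        return self.decoder(self.encoder(nir))

model = NIR2RGB()
nir = torch.rand(8, 3, 64, 64)   # stand-in paired NIR captures
rgb = torch.rand(8, 3, 64, 64)   # stand-in visible-light ground truth
loss = nn.functional.l1_loss(model(nir), rgb)
loss.backward()
```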


Fast Fine-Tuning of AI Transformers Using RAPIDS Machine Learning

#artificialintelligence

In recent years, transformers have emerged as a powerful deep neural network architecture that has been proven to beat the state of the art in many application domains, such as natural language processing (NLP) and computer vision. This post uncovers how you can achieve maximum accuracy with the fastest training time possible when fine-tuning transformers. We demonstrate how the cuML support vector machine (SVM) algorithm, from the RAPIDS Machine Learning library, can dramatically accelerate this process. cuML SVM on GPU is 500x faster than the CPU-based implementation. This approach uses SVM heads instead of the conventional multi-layer perceptron (MLP) head, making it possible to fine-tune with precision and ease.
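As a minimal sketch of that workflow, assuming RAPIDS cuML is installed and that embeddings have already been extracted once from a frozen, pretrained transformer (the random arrays below stand in for those embeddings):

```python
import numpy as np
from cuml.svm import SVC  # RAPIDS cuML, GPU-accelerated SVM

# Stand-in for per-example transformer embeddings (e.g., pooled
# hidden states); shapes and labels here are purely illustrative.
n_train, n_val, dim = 10_000, 1_000, 768
train_emb = np.random.randn(n_train, dim).astype(np.float32)
train_lbl = np.random.randint(0, 2, n_train).astype(np.float32)
val_emb = np.random.randn(n_val, dim).astype(np.float32)

# The SVM replaces the usual MLP head: fit it on the fixed embeddings.
head = SVC(kernel="rbf", C=1.0)
head.fit(train_emb, train_lbl)
val_pred = head.predict(val_emb)
```

Because the backbone stays frozen, each trial only re-solves the SVM on fixed features instead of re-running backpropagation; the 500x figure quoted above compares that SVM solve on GPU against the CPU-based implementation.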


An Interview with Nvidia CEO Jensen Huang about Manufacturing Intelligence

#artificialintelligence

It took a few moments to realize what was striking about the opening video for Nvidia's GTC conference: the complete absence of humans. That the video ended with Jensen Huang, the founder and CEO of Nvidia, is the exception that accentuates the takeaway. The theme of Huang's keynote was the idea of AI creating AI via machine learning; he called the idea "intelligence manufacturing":

None of these capabilities were remotely possible a decade ago. Accelerated computing, at data center scale, and combined with machine learning, has sped up computing by a million-x. Accelerated computing has enabled revolutionary AI models like the transformer, and made self-supervised learning possible.

AI has fundamentally changed what software can make, and how you make software. Companies are processing and refining their data, making AI software, becoming intelligence manufacturers. Their data centers are becoming AI factories. The first wave of AI learned perception and inference, like recognizing images, understanding speech, recommending a video, or an item to buy. The next wave of AI is robotics: AI planning actions. Digital robots, avatars, and physical robots will perceive, plan, and act, and just as AI frameworks like TensorFlow and PyTorch have become integral to AI software, Omniverse will be essential to making robotics software. Omniverse will enable the next wave of AI.

We will talk about the next million-x, and other dynamics shaping our industry, this GTC. Over the past decade, Nvidia-accelerated computing delivered a million-x speed-up in AI, and started the modern AI revolution. Now AI will revolutionize all industries. The CUDA libraries, the Nvidia SDKs, are at the heart of accelerated computing. With each new SDK, new science, new applications, and new industries can tap into the power of Nvidia computing.


Nvidia reveals H100 GPU for AI and teases 'world's fastest AI supercomputer'

#artificialintelligence

Nvidia has announced a slew of AI-focused enterprise products at its annual GTC conference. They include details of its new silicon architecture, Hopper; the first data center GPU built using that architecture, the H100; a new Grace CPU "superchip"; and vague plans to build what the company claims will be the world's fastest AI supercomputer, named Eos. Nvidia has benefited hugely from the AI boom of the last decade, with its GPUs proving a perfect match for popular, data-intensive deep learning methods. As the AI sector's demand for compute grows, Nvidia says it wants to provide more firepower. In particular, the company stressed the popularity of a type of machine learning system known as a Transformer.


Nvidia CEO Touts a 'Million X' Speedup in AI

#artificialintelligence

A decade ago, Google talked about "thinking in 10x." Whether measured against Moore's Law or the current rate of inflation, Nvidia CEO Jensen Huang has one-upped his fellow Silicon Valley technologists by thinking in "million x." According to Huang, that million-x speedup in computing will have a monumental impact on biology and chemistry in the near future. Huang used a good deal of his one-hour-and-42-minute keynote at Nvidia's GPU Technology Conference (GTC) this morning to tout the company's latest GPU architecture, dubbed the H100 Hopper. Built on a thinner 4-nanometer process and featuring 80 billion transistors (68% more than the previous-generation A100 GPU), the Hopper immediately becomes the premier processor for running AI workloads.