A new study from researchers at ETH Zurich's EcoVision Lab is the first to produce an interactive Global Canopy Height map. Using a newly developed deep learning algorithm that processes publicly available satellite images, the study could help scientists identify areas of ecosystem degradation and deforestation. The work could also guide sustainable forest management by identifying prime areas for carbon storage, a cornerstone of climate change mitigation. "Global high-resolution data on vegetation characteristics are needed to sustainably manage terrestrial ecosystems, mitigate climate change, and prevent biodiversity loss. With this project, we aim to fill the missing data gaps by merging data from two space missions with the help of deep learning," said Konrad Schindler, a professor in the Department of Civil, Environmental, and Geomatic Engineering at ETH Zurich.
With applications of artificial intelligence and deep learning (DL) on the rise, organisations are seeking simpler and faster solutions to the problems AI and deep learning present. The challenge has always been how to imitate the human brain and deploy its logic artificially. The result is the neural network: a set of algorithms, loosely modelled on the wiring of the human brain, that is designed to recognise patterns.
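The pattern-recognition idea can be made concrete with a tiny hand-wired network, a minimal sketch rather than any real framework's API: two threshold "neurons" feed a third, and together they recognise the XOR pattern, which no single linear neuron can.

```python
# A minimal hand-wired neural network: two hidden "neurons" feed one
# output neuron, together recognising the XOR pattern (which no single
# linear neuron can). Weights are set by hand purely for illustration.
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # fires if at least one input is on (OR)
    h2 = step(x1 + x2 - 1.5)    # fires only if both inputs are on (AND)
    return step(h1 - h2 - 0.5)  # OR and not AND -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

In practice the weights are not set by hand but learned from examples; that learning step is what deep learning automates at scale.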
Chintan Shah is a senior product manager at NVIDIA, focusing on AI products for intelligent video analytics. Chintan manages an end-to-end toolkit for efficient deep learning training and real-time inference. Previously, he developed hardware IPs for NVIDIA GPUs. Chintan holds a master's degree in electrical engineering from North Carolina State University.
A team of scientists has used GPU-accelerated deep learning to show how color can be brought to night-vision systems. In a paper published this week in the journal PLOS One, a team of researchers at the University of California, Irvine, led by Professor Pierre Baldi and Dr. Andrew Browne describes how they reconstructed color images of faces from photos taken with an infrared camera. The study is a step toward predicting and reconstructing what humans would see using cameras that collect light under imperceptible near-infrared illumination. The study's authors explain that humans see light in the so-called "visible spectrum," light with wavelengths between 400 and 700 nanometers. Typical night-vision systems rely on cameras that collect infrared light outside this spectrum, which we can't see.
In recent years, transformers have emerged as a powerful deep neural network architecture, proven to beat the state of the art in many application domains, such as natural language processing (NLP) and computer vision. This post shows how to achieve maximum accuracy with the fastest possible training time when fine-tuning transformers. We demonstrate how the cuML support vector machine (SVM) algorithm, from the RAPIDS Machine Learning library, can dramatically accelerate this process: cuML SVM on the GPU is 500x faster than the CPU-based implementation. The approach replaces the conventional multi-layer perceptron (MLP) head with an SVM head, making it possible to fine-tune with precision and ease.
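To make the head-swap idea concrete, here is a minimal sketch under stated assumptions: a plain hinge-loss linear classifier stands in for cuML's GPU SVM head, and random 2-D points stand in for the embeddings a fine-tuned transformer would produce. The data and training loop are illustrative, not the post's actual pipeline.

```python
import random

# Toy stand-in for transformer embeddings: 2-D vectors for two classes.
random.seed(0)
pos = [(random.gauss(2, 0.5), random.gauss(2, 0.5)) for _ in range(20)]
neg = [(random.gauss(-2, 0.5), random.gauss(-2, 0.5)) for _ in range(20)]
X = pos + neg
y = [1] * 20 + [-1] * 20

# An "SVM head": a linear classifier trained by subgradient descent on
# the L2-regularized hinge loss, fitted on the frozen embeddings.
w, b = [0.0, 0.0], 0.0
lr, lam = 0.1, 0.01
for epoch in range(100):
    for xi, yi in zip(X, y):
        margin = yi * (w[0] * xi[0] + w[1] * xi[1] + b)
        if margin < 1:  # point inside the margin: hinge subgradient step
            w = [w[0] + lr * (yi * xi[0] - lam * w[0]),
                 w[1] + lr * (yi * xi[1] - lam * w[1])]
            b += lr * yi
        else:           # correctly classified: regularization shrink only
            w = [w[0] * (1 - lr * lam), w[1] * (1 - lr * lam)]

pred = [1 if w[0] * xi[0] + w[1] * xi[1] + b > 0 else -1 for xi in X]
acc = sum(p == t for p, t in zip(pred, y)) / len(y)
print("training accuracy:", acc)
```

Because only the small head is trained against fixed embeddings, this step is cheap; the speedup the post describes comes from running the same kind of fit with cuML's GPU SVM instead of a CPU solver.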
It took a few moments to realize what was striking about the opening video for Nvidia's GTC conference: the complete absence of humans. That the video ended with Jensen Huang, the founder and CEO of Nvidia, is the exception that accentuates the takeaway. On the one hand, the theme of Huang's keynote was the idea of AI creating AI via machine learning; he called the idea "intelligence manufacturing":

None of these capabilities were remotely possible a decade ago. Accelerated computing, at data center scale, and combined with machine learning, has sped up computing by a million-x. Accelerated computing has enabled revolutionary AI models like the transformer, and made self-supervised learning possible. AI has fundamentally changed what software can make, and how you make software. Companies are processing and refining their data, making AI software, becoming intelligence manufacturers. Their data centers are becoming AI factories. The first wave of AI learned perception and inference, like recognizing images, understanding speech, recommending a video, or an item to buy. The next wave of AI is robotics: AI planning actions. Digital robots, avatars, and physical robots will perceive, plan, and act, and just as AI frameworks like TensorFlow and PyTorch have become integral to AI software, Omniverse will be essential to making robotics software. Omniverse will enable the next wave of AI. We will talk about the next million-x, and other dynamics shaping our industry, this GTC. Over the past decade, Nvidia-accelerated computing delivered a million-x speed-up in AI, and started the modern AI revolution. Now AI will revolutionize all industries. The CUDA libraries, the Nvidia SDKs, are at the heart of accelerated computing. With each new SDK, new science, new applications, and new industries can tap into the power of Nvidia computing.
Nvidia has announced a slew of AI-focused enterprise products at its annual GTC conference. They include details of its new silicon architecture, Hopper; the first datacenter GPU built using that architecture, the H100; a new Grace CPU "superchip"; and vague plans to build what the company claims will be the world's fastest AI supercomputer, named Eos. Nvidia has benefited hugely from the AI boom of the last decade, with its GPUs proving a perfect match for popular, data-intensive deep learning methods. As the AI sector's demand for compute grows, Nvidia says it wants to provide more firepower. In particular, the company stressed the popularity of a type of machine learning system known as the Transformer.
A decade ago, Google talked about "thinking in 10x." Whether it's Moore's Law or the current rate of inflation, Nvidia's CEO Jensen Huang has one-upped his fellow Silicon Valley technologists by thinking in million-x. According to Huang, that million-x compounding in computing speed will have a monumental impact on biology and chemistry in the near future. Huang used a good deal of his one-hour-and-42-minute keynote at Nvidia's GPU Technology Conference (GTC) this morning to tout the company's latest GPU architecture, dubbed the H100 Hopper. Built on a thinner 4-nanometer process and featuring 80 billion transistors (roughly 48% more than the 54 billion in the previous-generation A100 GPU), Hopper immediately becomes the premier processor for running AI workloads.
Editor's note: The name of the NVIDIA Transfer Learning Toolkit was changed to NVIDIA TAO Toolkit in August 2021. All references to the name have been updated in this blog. You probably have a career. But hit the books for a graduate degree or take online certificate courses at night, and you could start a new career that builds on your past experience. Transfer learning is the same idea: a model trained on one task is adapted to a new, related task, reusing what it has already learned.
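As a minimal sketch of that idea (everything below is illustrative, not the TAO Toolkit's actual API): the "pretrained" feature extractor stays frozen, and only a small new head is trained on the new task's data.

```python
import math

# Transfer learning in miniature: keep the "pretrained" layers frozen
# and train only a small new head on the new task. The extractor below
# is a fixed stand-in for a pretrained network's lower layers.
def features(x):
    return [x, x * x]  # frozen, reused representation

# New task: label is 1 when |x| > 1 (learnable from the frozen x*x feature).
data = [(-2.0, 1), (-0.5, 0), (0.2, 0), (1.5, 1), (-1.2, 1), (0.8, 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(2000):                      # train the new head only
    for x, t in data:
        f = features(x)
        p = 1 / (1 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
        g = p - t                          # logistic-loss gradient
        w = [w[0] - lr * g * f[0], w[1] - lr * g * f[1]]
        b -= lr * g

preds = [1 if w[0] * x + w[1] * x * x + b > 0 else 0 for x, _ in data]
print(preds)
```

The payoff is the same as in the career analogy: because the expensive representation is reused, the new task needs far less data and training than starting from scratch.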
Graphcore, the six-year-old, Bristol, England-based maker of artificial intelligence chips and systems, on Thursday announced a new chip called "Bow" that stacks two semiconductor dies one atop the other, which it said will speed up applications such as deep learning training by 40 percent while cutting energy use. The company also announced updated models of its multi-processor computers, called "IPU-POD," running the Bow chip, which it claims are five times faster than comparable DGX machines from Nvidia at half the price. In a nod to the growing size of deep learning neural nets such as Megatron-Turing, the company said it is working on a computer design, called The Good Computer, that will be capable of handling neural network models with 500 trillion parameters, making possible what it terms super-human "ultra-intelligence." The Bow processor is the latest version of what Graphcore calls "IPUs," for Intelligence Processing Units; the company has previously released two iterations of the IPU, the most recent in late 2020.