Neural Network Generates Global Tree Height Map, Reveals Carbon Stock Potential

#artificialintelligence

A new study from researchers at ETH Zurich's EcoVision Lab is the first to produce an interactive Global Canopy Height map. Using a newly developed deep learning algorithm that processes publicly available satellite images, the study could help scientists identify areas of ecosystem degradation and deforestation. The work could also guide sustainable forest management by identifying areas for prime carbon storage--a cornerstone in mitigating climate change. "Global high-resolution data on vegetation characteristics are needed to sustainably manage terrestrial ecosystems, mitigate climate change, and prevent biodiversity loss. With this project, we aim to fill the missing data gaps by merging data from two space missions with the help of deep learning," said Konrad Schindler, a Professor in the Department of Civil, Environmental, and Geomatic Engineering at ETH Zurich.
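
The two space missions are ESA's Sentinel-2 (dense optical imagery) and NASA's GEDI lidar (sparse but direct canopy-height measurements). As a rough sketch of the idea, not the EcoVision Lab's actual model, a convolutional network can regress per-pixel height from optical patches while being supervised only at sparse lidar footprints; the band count, network shape, and tensor names below are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch: regress per-pixel canopy height (metres) from
# multispectral optical patches, supervised only where sparse lidar
# footprints exist. 12 input bands and all shapes are assumptions.
class CanopyHeightNet(nn.Module):
    def __init__(self, in_bands: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),  # one height estimate per pixel
        )

    def forward(self, x):
        return self.net(x)

model = CanopyHeightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(optical_patches, lidar_heights):
    """optical_patches: (B, 12, H, W); lidar_heights: (B, 1, H, W) with
    NaN wherever no lidar footprint provides a height label."""
    pred = model(optical_patches)
    mask = ~torch.isnan(lidar_heights)   # supervise labelled pixels only
    loss = nn.functional.l1_loss(pred[mask], lidar_heights[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```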


What Is the Neural Network Libraries Container Available in NVIDIA GPU Cloud?

#artificialintelligence

With applications of artificial intelligence and deep learning (DL) on the rise, organisations seek easier and faster solutions to the problems they present. The challenge has always been how to imitate the human brain and deploy its logic artificially. The result: neural networks, which are essentially designed around the wiring of the human brain. Neural networks can be described as a set of algorithms, loosely modelled on the human brain, that are designed to recognise patterns.
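
The container in question packages Sony's open-source Neural Network Libraries (nnabla). As a minimal sketch of the pattern-recognition idea, assuming the standard nnabla Python API, here is a tiny two-layer classifier; the shapes and random stand-in data are illustrative only.

```python
import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

batch = 32
x = nn.Variable((batch, 1, 28, 28))   # input images (shape assumed)
t = nn.Variable((batch, 1))           # integer class labels

# Two layers of weighted connections and nonlinearities: the loosely
# brain-inspired structure that learns to recognise patterns.
h = F.relu(PF.affine(x, 128, name="fc1"))
y = PF.affine(h, 10, name="fc2")
loss = F.mean(F.softmax_cross_entropy(y, t))

solver = S.Adam()
solver.set_parameters(nn.get_parameters())

# One training step on random stand-in data.
x.d = np.random.rand(batch, 1, 28, 28)
t.d = np.random.randint(0, 10, (batch, 1))
loss.forward()
solver.zero_grad()
loss.backward()
solver.update()
```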


What Is Conversational AI? ZeroShot Bot CEO Jason Mars Explains

#artificialintelligence

Entrepreneur Jason Mars calls conversation our "first technology." Before humans invented the wheel, crafted a spear or tamed fire, we mastered the superpower of talking to one another. That makes conversation an incredibly important tool. But if you've dealt with the automated chatbots deployed by the customer service arms of just about any big organization lately -- whether banks or airlines -- you also know how hard it can be to get it right. Deep learning AI and new techniques such as zero-shot learning promise to change that.
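
Zero-shot learning lets a model handle intents it was never given labelled examples for. As an illustration of the general technique, not ZeroShot Bot's actual system, an off-the-shelf zero-shot classifier can route a customer request among freshly invented intent labels:

```python
from transformers import pipeline

# Off-the-shelf zero-shot classification: the model scores arbitrary
# candidate labels it was never trained on. Model choice is illustrative.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

request = "I was charged twice for my flight and want my money back."
intents = ["refund request", "booking change", "lost baggage", "seat upgrade"]

result = classifier(request, candidate_labels=intents)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```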


Low-Code AI Model Development with the NVIDIA TAO Toolkit

#artificialintelligence

Chintan Shah is a senior product manager at NVIDIA, focusing on AI products for intelligent video analytics. Chintan manages an end-to-end toolkit for efficient deep learning training and real-time inference. Previously, he developed hardware IPs for NVIDIA GPUs. Chintan holds a master's degree in electrical engineering from North Carolina State University.


Modern Computing: A Short History, 1945-2022

#artificialintelligence

Inspired by A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi, though the selection of key events in the journey from ENIAC to Tesla, from Data Processing to Big Data, is mine.

[Image caption: The Apple I, devised in a bedroom by Steve Wozniak, Steve Jobs and Ron Wayne, was a basic circuit board to which enthusiasts would add display units and keyboards. It was the first computer made by Apple Computer Inc., which became one of the fastest-growing companies in history, launching a number of innovative and influential computer hardware and software products. Most home computer users in the 1970s were hobbyists who designed and assembled their own machines.]

April 1945: John von Neumann's "First Draft of a Report on the EDVAC," often called the founding document of modern computing, defines the stored-program concept.

July 1945: Vannevar Bush publishes "As We May Think," in which he envisions the "Memex," a memory-extension device serving as a large personal repository of information that could be instantly retrieved through associative links.


A Night to Behold: Researchers Use Deep Learning to Bring Color to Night Vision

#artificialintelligence

A team of scientists has used GPU-accelerated deep learning to show how color can be brought to night-vision systems. In a paper published this week in the journal PLOS ONE, a team of researchers at the University of California, Irvine, led by Professor Pierre Baldi and Dr. Andrew Browne, describes how it reconstructed color images of faces from photos taken with an infrared camera. The study is a step toward predicting and reconstructing what humans would see in scenes captured under imperceptible near-infrared illumination. The study's authors explain that humans see light in the so-called "visible spectrum," light with wavelengths between 400 and 700 nanometers. Typical night-vision systems rely on cameras that collect infrared light outside this spectrum, which we can't see.
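
Conceptually, the task is image-to-image translation: predict visible-spectrum color from images captured at several near-infrared wavelengths. A minimal sketch of that idea follows; it is not the UC Irvine team's architecture, and the channel count, layer sizes, and tensors are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of infrared-to-color prediction: a small encoder-decoder maps
# images captured at several near-infrared wavelengths to RGB.
# Using 3 NIR channels here is an assumption, not the paper's setup.
class NIRToRGB(nn.Module):
    def __init__(self, nir_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(nir_channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, nir):
        return self.decoder(self.encoder(nir))

model = NIRToRGB()
nir_batch = torch.rand(4, 3, 128, 128)    # hypothetical NIR captures
rgb_target = torch.rand(4, 3, 128, 128)   # ground-truth color photos
loss = nn.functional.l1_loss(model(nir_batch), rgb_target)
loss.backward()
```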


Fast Fine-Tuning of AI Transformers Using RAPIDS Machine Learning

#artificialintelligence

In recent years, transformers have emerged as a powerful deep neural network architecture, proven to beat the state of the art in many application domains such as natural language processing (NLP) and computer vision. This post covers how to achieve maximum accuracy with the fastest training time possible when fine-tuning transformers. We demonstrate how the support vector machine (SVM) algorithm from cuML, the RAPIDS Machine Learning library, can dramatically accelerate this process: on GPU, cuML SVM is up to 500x faster than the CPU-based implementation. The approach uses an SVM head instead of the conventional multi-layer perceptron (MLP) head, making it possible to fine-tune with both precision and ease.
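
The recipe is to freeze the pretrained transformer, use it purely as a feature extractor, and train only a GPU-accelerated SVM on the pooled embeddings. A rough sketch under those assumptions (the backbone choice, pooling strategy, and toy data below are illustrative, not taken from the post):

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from cuml.svm import SVC  # GPU-accelerated SVM from RAPIDS cuML

# Frozen transformer used purely as a feature extractor.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
backbone = AutoModel.from_pretrained("bert-base-uncased").cuda().eval()

@torch.no_grad()
def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt").to("cuda")
    # [CLS] token embedding as a fixed-length feature vector.
    return backbone(**batch).last_hidden_state[:, 0].float().cpu().numpy()

train_texts = ["great movie", "terrible plot"]   # hypothetical toy data
train_labels = np.array([1, 0])

# The SVM "head" stands in for the usual MLP classification head; only
# it is fitted, so the transformer itself never needs backpropagation.
head = SVC(kernel="rbf", C=1.0)
head.fit(embed(train_texts), train_labels)
print(head.predict(embed(["loved it"])))
```

Because the backbone stays frozen, only the inexpensive SVM fit is repeated across experiments, which is where the practical speedup in iteration time comes from.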


NeRF Research Turns 2D Photos Into 3D Scenes

#artificialintelligence

When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds. Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The NVIDIA Research team has developed an approach that accomplishes this task almost instantly -- making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering. NVIDIA applied this approach to a popular new technology called neural radiance fields, or NeRF.
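
Under the hood, NeRF's inverse rendering rests on differentiable volume rendering: a network predicts density and color at sample points along each camera ray, and these are composited into a pixel color that can be compared against the photographs. A minimal sketch of that standard compositing step (generic NeRF math, not NVIDIA's Instant NeRF code):

```python
import torch

def composite(densities, colors, deltas):
    """Classic NeRF volume rendering along one ray.

    densities: (N,) non-negative sigma at N samples along the ray
    colors:    (N, 3) RGB predicted at those samples
    deltas:    (N,) distances between consecutive samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - torch.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unblocked.
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)
    trans = torch.cat([torch.ones(1), trans[:-1]])
    weights = alphas * trans
    return (weights[:, None] * colors).sum(dim=0)  # final pixel RGB

pixel = composite(torch.rand(64), torch.rand(64, 3), torch.full((64,), 0.1))
```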


NVIDIA's Instant NeRF: transforming 2D images into 3D scenes in record time - Actu IA

#artificialintelligence

Instant NeRF, a neural network-based technology capable of transforming a set of 2D photos into high-resolution 3D scenes in seconds, was introduced at an NVIDIA GTC session in March. According to the NVIDIA Research team, it is one of the first models of its kind to combine ultra-fast neural network training with fast rendering. In its press release, NVIDIA recalls the technological revolution Edwin Land set off on February 21, 1947 by producing an instant photo with a Polaroid camera; NVIDIA's artificial intelligence researchers took the opposite approach, aiming to transform a set of still images into a digital 3D scene in seconds. NVIDIA Research pays tribute to Land by recreating an iconic photo of Andy Warhol taking an instant photo and transforming it into a 3D scene using Instant NeRF.


Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion

#artificialintelligence

We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet. We train our network with 3-5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers with different gender, accent, or language, as we demonstrate with a user study.
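
As a rough sketch of the described setup, per-frame audio features in, per-vertex 3D coordinates out, with a jointly learned latent code for emotional state, consider the following; the dimensions, feature extraction, and per-clip embedding are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AudioToFace(nn.Module):
    """Sketch: per-frame audio features -> 3D face-vertex positions,
    conditioned on a latent emotion code learned jointly per clip."""
    def __init__(self, n_vertices=5000, audio_dim=256,
                 emotion_dim=16, n_clips=1000):
        super().__init__()
        self.n_vertices = n_vertices
        # One learned latent code per training clip; at inference it
        # becomes a user-controllable knob for the face's emotional state.
        self.emotion = nn.Embedding(n_clips, emotion_dim)
        self.audio_net = nn.Sequential(
            nn.Linear(audio_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        self.head = nn.Linear(512 + emotion_dim, n_vertices * 3)

    def forward(self, audio_feat, clip_ids):
        h = self.audio_net(audio_feat)
        z = self.emotion(clip_ids)
        out = self.head(torch.cat([h, z], dim=-1))
        return out.view(-1, self.n_vertices, 3)   # (batch, vertices, xyz)

model = AudioToFace()
feats = torch.rand(2, 256)                  # hypothetical audio features
verts = model(feats, torch.tensor([0, 1]))  # -> (2, 5000, 3)
```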