Researchers From Stanford and NVIDIA Introduce A Tri-Plane-Based 3D GAN Framework To Enable High-Resolution Geometry-Aware Image Synthesis

#artificialintelligence

Generative Adversarial Networks (GANs) have been one of the biggest hypes of recent years. Built on the famous generator-discriminator mechanism, their conceptually simple design has driven researchers to continuously improve the architecture. In image generation, the peak has been reached by StyleGAN, which can produce astonishingly realistic, high-quality images capable of fooling even human observers. While sample generation has achieved excellent results in the 2D domain, 3D GANs are still highly inefficient: if the 2D mechanism is applied directly in a 3D setting, the computational cost becomes prohibitive, since 3D data is very hard for current GPUs to manipulate.
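To make the generator-discriminator mechanism mentioned above concrete, here is a minimal, illustrative training step in PyTorch. The toy network sizes and random data are assumptions for brevity; this is not the StyleGAN or tri-plane architecture from the paper.

```python
# Minimal sketch of one GAN training step (illustrative sizes, random "real" data).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32  # toy sizes, not the paper's configuration

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # outputs a real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, img_dim)  # stand-in for a batch of real images

# Discriminator step: real images should score as 1, generated images as 0.
z = torch.randn(16, latent_dim)
fake_images = generator(z).detach()
d_loss = bce(discriminator(real_images), torch.ones(16, 1)) + \
         bce(discriminator(fake_images), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator score fresh fakes as real.
z = torch.randn(16, latent_dim)
g_loss = bce(discriminator(generator(z)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```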


Nvidia Canvas uses GauGAN2 AI model to achieve 4x resolution boost

#artificialintelligence

Nvidia has updated its Canvas real-time painting tool with a new AI model based on GauGAN2 research to achieve a 4x resolution boost. Canvas enables artists to turn simple brushstrokes into realistic landscapes filled with materials including water, grass, snow, mountains, and more, so concepts can be turned into final versions far more quickly than before. The free software, which is still in beta, is a prime example of how AI complements and enhances human abilities rather than replacing them. Canvas' latest update achieves close to photorealism with greater definition and fewer artifacts: the software delivers images at up to 1K pixel resolution, and the results can be exported to apps like Adobe Photoshop to integrate with an artist's existing workflow. GauGAN2 combines segmentation mapping, inpainting, and text-to-image generation in a single model.


Data Science With Raspberry Pi and Smart Sensors

#artificialintelligence

Ever thought that IoT can be used with data science? Most probably you did not even think of it (if you did, bravo!). I am about to share with you how an IoT device works and how we can benefit from it in data science. Before getting in, I want to tell you that I will mostly talk about the Raspberry Pi, a mini-computer, and about the different sensors and add-ons that we can use with it. There are a lot of IoT devices out there, but I will focus on this particular device. IoT stands for "Internet of Things".
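As an illustration of the Raspberry Pi workflow described above, here is a minimal sketch that samples a sensor, logs readings to a CSV file, and summarizes them with pandas. The read_temperature() function is a hypothetical placeholder; a real project would call the driver for its specific sensor (e.g. a DHT22 or BME280).

```python
# Sketch: sample a sensor on a schedule, append to a CSV log, then analyze it.
import csv
import random
import time

import pandas as pd


def read_temperature() -> float:
    """Hypothetical stand-in for a real sensor read; returns a fake value in Celsius."""
    return 20.0 + random.uniform(-2.0, 2.0)


# Collect a few samples and append them to the log file.
with open("readings.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(5):
        writer.writerow([time.time(), read_temperature()])
        time.sleep(1)

# Basic data-science step: load the log and summarize the readings.
df = pd.read_csv("readings.csv", names=["timestamp", "temp_c"])
print(df["temp_c"].describe())
```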


NVIDIA Develops AI That Can Remove Noise, Grain, And Even Watermarks From Photos

#artificialintelligence

Researchers from NVIDIA, Aalto University, and MIT have developed an AI that can remove noise from grainy photos and automatically enhance them. This technology can be beneficial in several real-world situations where clear image data is difficult to obtain, such as MRI scans, astronomical imaging, and more. Existing noise-reduction AI systems require both noisy and clean input images, but NVIDIA's AI can restore images without ever being shown what the noise-free image looks like; it only needs examples of corrupted images. The researchers trained the AI on 50,000 images, and the deep-learning algorithm produced impressive results.
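The key idea, training a denoiser without clean targets, can be sketched roughly as follows: both the input and the regression target are independently corrupted copies of the same image. This is a minimal illustration of that principle under those assumptions, not NVIDIA's actual model or training pipeline.

```python
# Sketch: the denoiser never sees a clean image; its target is a second,
# independently corrupted copy of the same underlying image.
import torch
import torch.nn as nn

denoiser = nn.Sequential(  # toy convolutional denoiser
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 32, 32)                       # stand-in for images never shown to the model
noisy_in = clean + 0.1 * torch.randn_like(clean)       # corrupted input
noisy_target = clean + 0.1 * torch.randn_like(clean)   # independently corrupted target

# L2 regression toward a noisy target still pulls the output toward the clean image on average.
loss = nn.functional.mse_loss(denoiser(noisy_in), noisy_target)
opt.zero_grad(); loss.backward(); opt.step()
```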


The NVIDIA PilotNet Experiments

arXiv.org Artificial Intelligence

Four years ago, an experimental system known as PilotNet became the first NVIDIA system to steer an autonomous car along a roadway. This system represents a departure from the classical approach for self-driving in which the process is manually decomposed into a series of modules, each performing a different task. In PilotNet, on the other hand, a single deep neural network (DNN) takes pixels as input and produces a desired vehicle trajectory as output; there are no distinct internal modules connected by human-designed interfaces. We believe that handcrafted interfaces ultimately limit performance by restricting information flow through the system and that a learned approach, in combination with other artificial intelligence systems that add redundancy, will lead to better overall performing systems. We continue to conduct research toward that goal. This document describes the PilotNet lane-keeping effort, carried out over the past five years by our NVIDIA PilotNet group in Holmdel, New Jersey. Here we present a snapshot of system status in mid-2020 and highlight some of the work done by the PilotNet group.
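A simplified sketch of the end-to-end idea: a single convolutional network maps raw camera pixels directly to a driving output, with no hand-designed modules in between. The layer sizes and the single steering output below are illustrative assumptions, not NVIDIA's exact PilotNet configuration.

```python
# Sketch of an end-to-end "pixels in, driving signal out" network.
import torch
import torch.nn as nn


class TinyPilotNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(  # convolutional feature extractor over the camera frame
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(  # fully connected head producing the driving output
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 1),  # one steering value here; a trajectory would need more outputs
        )

    def forward(self, pixels):
        return self.head(self.features(pixels))


model = TinyPilotNet()
frame = torch.rand(1, 3, 66, 200)  # one camera frame (batch, channels, height, width)
print(model(frame).shape)          # -> torch.Size([1, 1])
```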


Lenovo's Google-powered Smart Clock drops to $39 at Walmart

Engadget

If you're waiting for Amazon Prime Day to kick off tomorrow, you may want to take advantage of the deals that other retailers already have going on. Walmart has already kicked off its own "anti-Prime Day" savings event and with it comes the best price we've seen on the Lenovo Smart Clock. Right now, Walmart has the smart alarm clock for $39, which is $1 cheaper than its previous low and 50 percent off its normal price. This little gadget has gotten quite popular since its release last year. We gave it a score of 87 for its charming design, ambient light sensor, sunrise alarm feature and lack of camera.


Accelerating Medical Image Segmentation with NVIDIA Tensor Cores and TensorFlow 2 | NVIDIA Developer Blog

#artificialintelligence

Medical image segmentation is a hot topic in the deep learning community. Proof of that is the number of challenges, competitions, and research projects being conducted in this area, which only grows year over year. Among all the approaches to this problem, U-Net has become the backbone of many of the top-performing solutions for both 2D and 3D segmentation tasks, thanks to its simplicity, versatility, and effectiveness. When practitioners are confronted with a new segmentation task, the first step is commonly to use an existing implementation of U-Net as a backbone.
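For readers unfamiliar with the U-Net pattern the article builds on, here is a minimal TensorFlow 2 / Keras sketch: an encoder that downsamples, a decoder that upsamples, and skip connections joining matching resolutions. The channel counts and depth are toy values assumed for brevity, not the tuned configuration described in the blog post.

```python
# Sketch of a tiny U-Net: encoder, bottleneck, decoder with skip connections.
import tensorflow as tf
from tensorflow.keras import layers


def tiny_unet(input_shape=(128, 128, 1), num_classes=2):
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: convolve, then downsample.
    e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(e1)
    e2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(e2)

    # Bottleneck.
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate the matching encoder features (skip connections).
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    d2 = layers.Conv2D(32, 3, padding="same", activation="relu")(layers.Concatenate()([u2, e2]))
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(d2)
    d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(layers.Concatenate()([u1, e1]))

    # Per-pixel class probabilities.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(d1)
    return tf.keras.Model(inputs, outputs)


model = tiny_unet()
model.summary()
```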


Seagate Transforms Manufacturing with Deep Learning from Edge to Cloud

#artificialintelligence

This video is about NVIDIA, HPE Edgeline, and Apollo systems that help factories leverage AI (artificial intelligence) to identify real-time data patterns that people might miss.


Project14 Vision Thing: Build Things Using Graphics, AI, Computer Vision, & Beyond!

#artificialintelligence

Enter your project for a chance to win an Oscilloscope Grand Prize Package for the most creative Vision Thing project! There's a lot of variety in how you choose to implement your project, and it's a great opportunity to do something creative that stretches the imagination of what hardware can do. Your project can be a vision-based project involving anything related to computer vision and machine learning, camera vision and AI, or deep learning, using hardware such as the Nvidia Jetson Nano, a Pi with an Intel Compute Stick, an Edge TPU, etc., as vimarsh_ and aabhas suggested. Or it can be a graphics project, such as adding a graphical display to a microcontroller, image processing on a microcontroller, interfacing a camera to a microcontroller for image recognition, or FPGA camera interfacing, image processing, and graphical display, as dougw suggested.


These faces show how far AI image generation has advanced in just four years

#artificialintelligence

Developments in artificial intelligence move at a startling pace -- so much so that it's often difficult to keep track. But one area where progress is as plain as the nose on your AI-generated face is the use of neural networks to create fake images. In the image above you can see what four years of progress in AI image generation looks like. The crude black-and-white faces on the left are from 2014, published as part of a landmark paper that introduced the AI tool known as the generative adversarial network (GAN). The color faces on the right come from a paper published earlier this month, which uses the same basic method but is clearly a world apart in terms of image quality.