Nvidia's latest AI tech translates text into landscape images

#artificialintelligence

Nvidia today detailed an AI system called GauGAN2, the successor to its GauGAN model, that lets users create lifelike landscape images that don't exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation in a single tool, GauGAN2 is designed to create photorealistic art with a mix of words and drawings. "Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher quality of images," Isha Salian, a member of Nvidia's corporate communications team, wrote in a blog post. "Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky."
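To make the "segmentation map plus text prompt" idea concrete, here is a minimal Keras sketch of a generator conditioned on both inputs. It only illustrates how the two conditioning signals can be combined; the resolution, class count, embedding size, and layer stack are assumptions, and this is not Nvidia's actual GauGAN2 architecture.

```python
# Illustrative only: NOT Nvidia's GauGAN2. A toy generator conditioned on both a
# segmentation map and a text embedding, showing how the two signals can be combined.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 12      # assumed number of segmentation labels (sky, water, rock, ...)
TEXT_EMBED_DIM = 256  # assumed size of a text-prompt embedding

seg_map = layers.Input(shape=(64, 64, NUM_CLASSES), name="segmentation_map")
text_emb = layers.Input(shape=(TEXT_EMBED_DIM,), name="text_embedding")

# Broadcast the text embedding over the spatial grid and concatenate it with the
# label map, so every pixel is conditioned on both the layout and the description.
text_grid = layers.Dense(64 * 64 * 8, activation="relu")(text_emb)
text_grid = layers.Reshape((64, 64, 8))(text_grid)
x = layers.Concatenate()([seg_map, text_grid])

# A small convolutional decoder standing in for the real generator.
for filters in (128, 64, 32):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
rgb = layers.Conv2D(3, 3, padding="same", activation="tanh", name="generated_image")(x)

generator = tf.keras.Model([seg_map, text_emb], rgb)
generator.summary()
```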


Here's why a great gaming laptop is the best all-around computer for college

Mashable

If you're tackling a degree in science, technology, engineering, or mathematics, there's nothing more frustrating than a machine that can't keep up with the apps you need for your coursework. Here's where a powerful gaming laptop proves its mettle. With GPU acceleration, your machine delivers super-fast image processing and real-time rendering for complex component designs, letting you work quickly and efficiently. For engineering students, this means more interactive, real-time rendering for 3D design and modeling, plus faster solutions and visualization for mechanical, structural, and electrical simulations. For computer science, data science, and economics students, NVIDIA's GeForce RTX 30 Series laptops enable faster data analytics for processing large data sets -- all with efficient training for deep learning and traditional machine learning models for computer vision, natural language processing, and tabular data.


GTC 2021: #1 AI Conference

#artificialintelligence

Yann LeCun is Director of AI Research at Facebook, and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, affiliated with the NYU Center for Data Science, the Courant Institute of Mathematical Sciences, the Center for Neural Science, and the Electrical and Computer Engineering Department. He received the Electrical Engineer Diploma from Ecole Superieure d'Ingenieurs en Electrotechnique et Electronique (ESIEE), Paris in 1983, and a PhD in Computer Science from Universite Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, NJ in 1988. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU as a professor in 2003, after a brief period as a Fellow of the NEC Research Institute in Princeton. From 2012 to 2014 he directed NYU's initiative in data science and became the founding director of the NYU Center for Data Science.


NVIDIA Develops AI That Can Remove Noise, Grain, And Even Watermarks From Photos

#artificialintelligence

Researchers from NVIDIA, Aalto University, and MIT have developed an AI that can remove noise from grainy photos and automatically enhance them. This technology can be beneficial in several real-world situations where clear image data is difficult to obtain, such as MRI scans and astronomical imaging. Existing noise-reduction AI systems require both noisy and clean input images, but NVIDIA's AI can restore images without being shown what the noise-free image looks like. It just needs to look at examples of corrupted images. The researchers trained the AI on 50,000 images, and the deep-learning algorithm was able to produce impressive results.
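The key point, that the network never needs a clean target, can be sketched in a few lines: train a denoiser whose input and training target are two independently corrupted copies of the same image, in the spirit of the approach the article describes (known in the literature as Noise2Noise). The tiny network and the synthetic Gaussian corruption below are assumptions for illustration, not NVIDIA's exact setup.

```python
# Minimal sketch of "learning to denoise from noisy data only": both the input
# and the training target are independently corrupted versions of the same image,
# so the model never sees a clean reference.
import tensorflow as tf
from tensorflow.keras import layers

def add_gaussian_noise(images, stddev=0.1):
    # Synthetic corruption used for illustration; real data would already be noisy.
    return images + tf.random.normal(tf.shape(images), stddev=stddev)

# Tiny convolutional denoiser standing in for the paper's deeper network.
denoiser = tf.keras.Sequential([
    layers.Input(shape=(None, None, 3)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(3, 3, padding="same"),
])
denoiser.compile(optimizer="adam", loss="mse")

def noisy_pair_dataset(clean_images, batch_size=16):
    # The clean images are only used here to synthesize two noisy views;
    # the model itself is never shown them.
    ds = tf.data.Dataset.from_tensor_slices(clean_images).batch(batch_size)
    return ds.map(lambda x: (add_gaussian_noise(x), add_gaussian_noise(x)))

# Wire-up check on random data, just to show the training loop runs.
fake_clean = tf.random.uniform((64, 64, 64, 3))
denoiser.fit(noisy_pair_dataset(fake_clean), epochs=1)
```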


Accelerating Medical Image Segmentation with NVIDIA Tensor Cores and TensorFlow 2 | NVIDIA Developer Blog

#artificialintelligence

Medical image segmentation is a hot topic in the deep learning community. Proof of that is the number of challenges, competitions, and research projects being conducted in this area, which only rises year over year. Among all the different approaches to this problem, U-Net has become the backbone of many of the top-performing solutions for both 2D and 3D segmentation tasks, thanks to its simplicity, versatility, and effectiveness. When practitioners are confronted with a new segmentation task, the first step is commonly to use an existing implementation of U-Net as a backbone.
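For readers who have not used it, a minimal U-Net in TensorFlow 2 / Keras looks roughly like the sketch below: a contracting path, a bottleneck, and an expanding path with skip connections. The depth, filter counts, and input shape are assumptions, not the configuration used in the article.

```python
# A minimal 2D U-Net sketch in TensorFlow 2 / Keras (encoder, bottleneck, decoder
# with skip connections). Sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions: the basic building block on both U-Net paths.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 1), num_classes=2):
    inputs = layers.Input(shape=input_shape)

    # Contracting path: convolve, remember the feature map, downsample.
    skips = []
    x = inputs
    for filters in (32, 64, 128):
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    x = conv_block(x, 256)  # bottleneck

    # Expanding path: upsample and concatenate the matching skip connection.
    for filters, skip in zip((128, 64, 32), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, filters)

    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs, name="mini_unet")

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```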


Project14 Vision Thing: Build Things Using Graphics, AI, Computer Vision, & Beyond!

#artificialintelligence

Enter Your Project for a chance to win an Oscilloscope Grand Prize Package for the Most Creative Vision Thing Project! There's a lot of variety in how you choose to implement your project. It's a great opportunity to do something creative that stretches the imagination of what hardware can do. Your project can be a vision-based project involving anything related to computer vision and machine learning, camera vision and AI, or deep learning, using hardware such as the Nvidia Jetson Nano, a Pi with an Intel Compute Stick, or an Edge TPU, as vimarsh_ and aabhas suggested. Or it can be a graphics project, such as adding a graphical display to a microcontroller, doing image processing on a microcontroller, interfacing a camera to a microcontroller for image recognition, or FPGA-based camera interfacing, image processing, or graphical display, as dougw suggested.


These faces show how far AI image generation has advanced in just four years

#artificialintelligence

Developments in artificial intelligence move at a startling pace -- so much so that it's often difficult to keep track. But one area where progress is as plain as the nose on your AI-generated face is the use of neural networks to create fake images. In the image above you can see what four years of progress in AI image generation looks like. The crude black-and-white faces on the left are from 2014, published as part of a landmark paper that introduced the AI tool known as the generative adversarial network (GAN). The color faces on the right come from a paper published earlier this month, which uses the same basic method but is clearly a world apart in terms of image quality.
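For context, the "same basic method" is the standard GAN recipe: a generator maps random noise to images while a discriminator is trained to tell generated images from real ones, and the two are updated in alternation. Below is a minimal, generic training-step sketch in Keras; the tiny 32x32 networks are assumptions for illustration and bear no relation to the specific models whose outputs the article compares.

```python
# Generic GAN training step: generator maps noise to images, discriminator
# scores real vs. fake, and each is updated against the other.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 64

generator = tf.keras.Sequential([
    layers.Input(shape=(LATENT_DIM,)),
    layers.Dense(8 * 8 * 64, activation="relu"),
    layers.Reshape((8, 8, 64)),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid"),
])  # outputs 32x32 grayscale images

discriminator = tf.keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(1),  # real/fake logit
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal((tf.shape(real_images)[0], LATENT_DIM))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator: push real toward 1 and fake toward 0.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: try to make the discriminator call fakes real.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```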


Deep Learning-Enabled Image Recognition For Faster Insights

#artificialintelligence

More than two billion images are shared daily on social networks alone. Research shows that it would take a person ten years to look at all the photos shared on Snapchat in the last hour! Media buyers and providers struggle to efficiently organize relevant content into groups, parse the components of images and videos, and determine the return on investment from generated content. NVIDIA has many customers and ecosystem partners tackling that problem, using NVIDIA DGX as their preferred platform for deep learning (DL) powered image recognition. One of the notable names in that ecosystem is Imagga, a pioneer in offering a deep learning powered image recognition and image processing solution, built on NVIDIA DGX Station, the world's first personal AI supercomputer.


NVIDIA AI scrubs noise and watermarks from digital images

#artificialintelligence

NVIDIA researchers are back with yet another digital image technology that pushes the limits of traditional image manipulation. Unlike Adobe's recently disclosed project, which involved a neural network trained to spot digitally altered images, NVIDIA's newest creation can scrub digital sensor noise and watermarks from digital images. Thanks to the artificial intelligence powering it, the feature is far more effective than existing denoising tools. Digital camera sensor noise, though not as severe as it once was, is still common in consumer-tier cameras, particularly smartphone cameras in low-light conditions. This is due to the small sensor size used in these cameras, which makes post-processing necessary to improve image quality.


Nvidia Is Using AI to Perfectly Fake Slo-Mo Videos

#artificialintelligence

One of the hardest video effects to fake is slow motion. It requires software to stretch out a clip by creating hundreds of non-existent in-between frames, and the results are often stuttered and unconvincing. But taking advantage of the incredible image-processing potential of deep learning, Nvidia has come up with a way to fake flawless slow-motion footage from a standard video clip. It's a good thing The Slow Mo Guys both have day jobs to fall back on. Slowing a video clip shot at 30 frames per second down to the equivalent of 240 frames per second requires the creation of 210 additional frames for every second of footage, or seven in-betweens for every frame originally captured.
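The arithmetic, and the naive alternative that learned interpolation improves on, can be sketched directly: going from 30 fps to 240 fps means synthesizing 7 in-between frames for every captured frame, and a simple linear cross-fade between neighboring frames (shown below purely for contrast, not Nvidia's method) is exactly what produces the stuttered, ghosted look.

```python
# Frame-count arithmetic for 30 fps -> 240 fps, plus a naive linear cross-fade
# for contrast. Learned interpolation estimates motion instead of blending.
import numpy as np

def inbetweens_needed(src_fps=30, dst_fps=240):
    # 240 fps needs 8x as many frames: 7 new in-between frames per source frame.
    factor = dst_fps // src_fps
    return factor - 1

def naive_interpolate(frame_a, frame_b, num_inbetweens):
    # Linear cross-fade between two frames; this is why simple approaches look
    # stuttered or ghosted compared to motion-aware, learned interpolation.
    steps = np.linspace(0, 1, num_inbetweens + 2)[1:-1]
    return [(1 - t) * frame_a + t * frame_b for t in steps]

print(inbetweens_needed())        # 7 in-betweens per captured frame
print(30 * inbetweens_needed())   # 210 new frames per second of original footage

frame_a = np.zeros((4, 4, 3))
frame_b = np.ones((4, 4, 3))
print(len(naive_interpolate(frame_a, frame_b, inbetweens_needed())))  # 7
```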