
Fotokite Sigma Fully Autonomous Drone is Powered by NVIDIA's Jetson Edge AI Platform

#artificialintelligence

When it comes to emergencies, first responders don't have much time to think, whether it's a fire or a search-and-rescue mission, and that's why Switzerland-based Fotokite has developed a fully autonomous tethered drone. Called Sigma, it was built on the NVIDIA Jetson platform and specializes in the vast majority of situations where first responders need an aerial perspective during an emergency. More than a standard autonomous drone, it comes equipped with a thermal camera capable of determining where a fire is, as well as the safest location to enter or exit a structure. The drone can automatically highlight hotspots that need attention and guide firefighters to where water is needed most, even through heavy smoke or in limited visibility. Everything from autonomous flight and real-time data delivery to the user interface and real-time streaming is as simple as pushing a button, which means first responders can focus on saving lives and keeping people safe.
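Fotokite hasn't published its detection pipeline, but the hotspot-highlighting idea can be illustrated with a simple threshold over a thermal frame; the temperature grid and threshold below are made-up values, not Sigma's actual parameters:

```python
# Illustrative hotspot detection on a thermal frame (hypothetical values).
# A real pipeline on a Jetson would run a trained model on the GPU; this
# sketch just thresholds raw temperatures to flag cells needing attention.

HOTSPOT_C = 150.0  # assumed alert threshold in degrees Celsius

def find_hotspots(frame, threshold=HOTSPOT_C):
    """Return (row, col) cells whose temperature exceeds the threshold."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, temp in enumerate(row)
            if temp > threshold]

thermal_frame = [
    [25.0,  30.0,  28.0],
    [40.0, 210.5,  35.0],   # simulated fire hotspot at (1, 1)
    [33.0,  90.0, 180.2],   # and a second one at (2, 2)
]
print(find_hotspots(thermal_frame))  # → [(1, 1), (2, 2)]
```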


Nvidia's latest AI tech translates text into landscape images

#artificialintelligence

Nvidia today detailed an AI system called GauGAN2, the successor to its GauGAN model, that lets users create lifelike landscape images that don't exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation in a single tool, GauGAN2 is designed to create photorealistic art with a mix of words and drawings. "Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher-quality of images," Isha Salian, a member of Nvidia's corporate communications team, wrote in a blog post. "Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky."
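The post doesn't detail GauGAN2's internals, but GauGAN-style generators condition on segmentation maps, which are typically one-hot encoded before being fed to the network; the labels and the tiny 2x2 map below are invented for illustration, not NVIDIA's actual label set:

```python
# Sketch of the segmentation-map encoding step used by GauGAN-style
# conditional generators: each pixel's class label becomes a one-hot
# vector across C class channels. Labels here are hypothetical.

LABELS = {"sky": 0, "mountain": 1, "tree": 2}

def one_hot_segmap(seg, num_classes):
    """Convert an HxW grid of label ids to HxWxC one-hot planes."""
    return [[[1.0 if seg[r][c] == k else 0.0 for k in range(num_classes)]
             for c in range(len(seg[0]))]
            for r in range(len(seg))]

seg = [[0, 0],     # sky, sky
       [1, 2]]     # mountain, tree
planes = one_hot_segmap(seg, len(LABELS))
print(planes[1][0])  # → [0.0, 1.0, 0.0]  (mountain)
```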


Lenovo's Google-powered Smart Clock drops to $39 at Walmart

Engadget

If you're waiting for Amazon Prime Day to kick off tomorrow, you may want to take advantage of the deals that other retailers already have going on. Walmart has already kicked off its own "anti-Prime Day" savings event and with it comes the best price we've seen on the Lenovo Smart Clock. Right now, Walmart has the smart alarm clock for $39, which is $1 cheaper than its previous low and 50 percent off its normal price. This little gadget has gotten quite popular since its release last year. We gave it a score of 87 for its charming design, ambient light sensor, sunrise alarm feature and lack of camera.


Accelerating Medical Image Segmentation with NVIDIA Tensor Cores and TensorFlow 2 (NVIDIA Developer Blog)

#artificialintelligence

Medical image segmentation is a hot topic in the deep learning community. Proof of that is the number of challenges, competitions, and research projects being conducted in this area, which only rises year over year. Among all the different approaches to this problem, U-Net has become the backbone of many of the top-performing solutions for both 2D and 3D segmentation tasks, thanks to its simplicity, versatility, and effectiveness. When practitioners are confronted with a new segmentation task, the first step is commonly to use an existing implementation of U-Net as a backbone.
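As a rough sketch of why U-Net makes such a convenient backbone, the following traces the characteristic shape pattern of its encoder/decoder with skip connections; the input size, channel counts, and depth are illustrative choices, not values from the blog post:

```python
# Minimal shape walkthrough of a U-Net-style encoder/decoder with skip
# connections. No real convolutions here; the point is how spatial size
# halves on the way down, doubles on the way up, and how each decoder
# level concatenates the matching encoder feature map (the "skip").

def unet_shapes(size, channels, depth):
    """Return (encoder_shapes, decoder_shapes) as (H, W, C) tuples."""
    enc = []
    c = channels
    for _ in range(depth):
        enc.append((size, size, c))
        size //= 2          # 2x2 pooling halves H and W
        c *= 2              # channel count typically doubles per level
    dec = []
    for h, w, ch in reversed(enc):
        # upsample back to (h, w); the skip concat doubles the channels
        dec.append((h, w, ch * 2))
    return enc, dec

enc, dec = unet_shapes(256, 64, 3)
print(enc)  # → [(256, 256, 64), (128, 128, 128), (64, 64, 256)]
print(dec)  # → [(64, 64, 512), (128, 128, 256), (256, 256, 128)]
```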


Nvidia's Clara to help hospitals with radiology AI at the edge

#artificialintelligence

Nvidia unveiled a new federated learning edge computing reference application for radiology to help hospitals crunch medical data for better disease detection while protecting patient privacy. Called Clara Federated Learning, the system relies on Nvidia EGX, a computing platform announced earlier in 2019. It uses the Jetson Nano, a low-wattage computer that can provide up to one-half trillion operations per second for tasks like image recognition. EGX enables low-latency artificial intelligence at the edge to act on data, in this case images from MRIs, CT scans, and more. Nvidia announced Clara on Sunday at the Radiological Society of North America conference in Chicago.
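Clara's exact protocol isn't described in this piece, but the core idea of federated learning can be sketched with plain federated averaging: each hospital trains locally and only model weights, never patient images, leave the site. The hospital weight vectors below are fabricated for the example:

```python
# Federated averaging (FedAvg) in miniature: a central server averages
# per-site model weights after each local training round. This is a
# generic sketch, not NVIDIA Clara's actual aggregation protocol.

def federated_average(site_weights):
    """Element-wise average of per-site weight vectors (equal weighting)."""
    n = len(site_weights)
    length = len(site_weights[0])
    return [sum(w[i] for w in site_weights) / n for i in range(length)]

# Hypothetical weight vectors from three hospitals after a local round.
hospital_a = [2.0, 4.0, 6.0]
hospital_b = [4.0, 2.0, 6.0]
hospital_c = [6.0, 6.0, 6.0]
global_weights = federated_average([hospital_a, hospital_b, hospital_c])
print(global_weights)  # → [4.0, 4.0, 6.0]
```

A production system would weight each site by its sample count and add secure aggregation, but the privacy property is the same: raw scans stay on the hospital's own hardware.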


NVIDIA's new AI lets you recreate your pet's smile on a lion

#artificialintelligence

NVIDIA, the company behind some of the most impressive graphics cards, has pulled off yet another piece of machine learning-powered wizardry. Researchers from the Santa Clara-based chipmaker have created a new AI tool -- dubbed Ganimal -- that can take in a picture of an animal and recreate its facial expression and pose on the face of any other creature. Described in a paper titled "Few-Shot Unsupervised Image-to-Image Translation," aka FUNIT, the image-to-image translation method leverages generative adversarial networks (GANs), a class of neural networks widely adopted in image generation and transfer scenarios. You can give the tool a spin right here and read the technical aspects of the research here. "In this case, we train a network to jointly solve many translation tasks where each task is about translating a random source animal to a random target animal by leveraging a few example images of the target animal," Ming-Yu Liu, the lead computer vision researcher behind FUNIT, said.
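The quoted training setup (many randomly sampled translation tasks, each supplied with a few example images of the target class) can be sketched as an episode sampler; the animal classes and image ids below are placeholders, not the FUNIT dataset:

```python
# Sketch of few-shot episode sampling as described in the FUNIT quote:
# pick a random source image and a few examples of a different target
# class. The classes and "images" (string ids) here are made up.

import random

def sample_episode(dataset, k_shot, rng):
    """Pick a source image plus k examples from a different target class."""
    source_cls, target_cls = rng.sample(sorted(dataset), 2)
    source = rng.choice(dataset[source_cls])
    examples = rng.sample(dataset[target_cls], k_shot)
    return source, target_cls, examples

animals = {
    "fox":  ["fox_0", "fox_1", "fox_2"],
    "lion": ["lion_0", "lion_1", "lion_2"],
    "pug":  ["pug_0", "pug_1", "pug_2"],
}
rng = random.Random(0)
src, tgt, examples = sample_episode(animals, k_shot=2, rng=rng)
print(src, tgt, examples)
```

The real model would then translate `src` into the `tgt` class conditioned on `examples`; this sketch only covers the task-sampling loop.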


Seagate Transforms Manufacturing with Deep Learning from Edge to Cloud

#artificialintelligence

This video is about NVIDIA, HPE Edgeline, and Apollo systems that help factories leverage AI (artificial intelligence) to identify real-time data patterns that people might miss.


Project14 Vision Thing: Build Things Using Graphics, AI, Computer Vision, & Beyond!

#artificialintelligence

Enter your project for a chance to win an Oscilloscope Grand Prize Package for the most creative Vision Thing project! There's a lot of variety in how you choose to implement your project. It's a great opportunity to do something creative that stretches the imagination of what hardware can do. Your project can be a vision-based project involving anything related to computer vision and machine learning (camera vision and AI projects, deep learning), using hardware such as the Nvidia Jetson Nano, Pi with Intel Compute Stick, Edge TPU, etc., as vimarsh_ and aabhas suggested. Or it can be a graphics project, such as adding a graphical display to a microcontroller, image processing on a microcontroller, interfacing a camera to a microcontroller for image recognition, or FPGA camera interfacing/image processing/graphical display, as dougw suggested.


OpenEI: An Open Framework for Edge Intelligence

arXiv.org Artificial Intelligence

In the last five years, edge computing has attracted tremendous attention from industry and academia due to its promise to reduce latency, save bandwidth, improve availability, and protect data privacy. At the same time, we have witnessed the proliferation of AI algorithms and models which accelerate the successful deployment of intelligence, mainly in cloud services. These two trends, combined, have created a new horizon: Edge Intelligence (EI). The development of EI requires much attention from both the computer systems research community and the AI community to meet these demands. However, existing computing techniques used in the cloud are not directly applicable to edge computing due to the diversity of computing resources and the distribution of data sources. We envision that a framework is missing that can be rapidly deployed at the edge and enable edge AI capabilities. To address this challenge, in this paper we first present a definition and a systematic review of EI. Then, we introduce an Open Framework for Edge Intelligence (OpenEI), a lightweight software platform to equip edges with intelligent processing and data sharing capability. We analyze four fundamental EI techniques used to build OpenEI and identify several open problems and potential research directions. Finally, four typical application scenarios enabled by OpenEI are presented.
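The latency argument for edge intelligence can be made concrete with a back-of-envelope comparison between on-device inference and a cloud round trip; every number below is an illustrative assumption, not a measurement from the paper:

```python
# Toy latency model comparing edge inference with cloud offloading.
# All parameters (payload size, uplink speed, RTT, inference times)
# are assumed example values, chosen only to show the trade-off.

def cloud_latency_ms(payload_kb, uplink_mbps, rtt_ms, cloud_infer_ms):
    """Upload time + network round trip + server-side inference."""
    upload_ms = payload_kb * 8 / (uplink_mbps * 1000) * 1000  # kbit / (kbit/s)
    return upload_ms + rtt_ms + cloud_infer_ms

def edge_latency_ms(edge_infer_ms):
    """On-device inference: no network hop at all."""
    return edge_infer_ms

# 500 KB image over a 10 Mbps uplink, 60 ms RTT, fast cloud GPU (5 ms)
cloud = cloud_latency_ms(payload_kb=500, uplink_mbps=10, rtt_ms=60, cloud_infer_ms=5)
# slower edge accelerator, but local (40 ms)
edge = edge_latency_ms(edge_infer_ms=40)
print(round(cloud), round(edge))  # → 465 40
```

Even with a much slower accelerator, the edge path wins here because the 400 ms upload dominates; this is exactly the regime (plus the bandwidth and privacy points) that motivates frameworks like OpenEI.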