Nvidia's $99 Jetson Nano Developer Kit brings GPU-supercharged AI smarts to maker projects


Machine learning is coming to the masses, and those hordes of DIY drones and robots are about to get a whole lot smarter. On Monday at Nvidia's GTC conference, the company plans to reveal the $99 Jetson Nano Developer Kit. The kit is an expansion of the company's "Jetson" embedded graphics platform, and it aims to infuse your wildest maker projects with AI that the Raspberry Pi could only dream of. It'll be available immediately online, through distributors, and at GTC itself. The Jetson Nano Developer Kit is a standalone version of the new Jetson Nano AI computer also announced today.

Jetson Nano and Google Coral Edge TPU - a comparison - 3dvisionlabs


Since the topics of "Machine Learning" and "Artificial Intelligence" are growing ever bigger, dedicated AI hardware is popping up from a number of companies. To get an overview of the current state of AI platforms, we took a closer look at two of them: NVIDIA's Jetson Nano and Google's new Coral USB Accelerator. In this article we will discuss the typical workflow for these platforms and their pros and cons. NVIDIA's Jetson Nano is a single-board computer which, in comparison to something like a Raspberry Pi, packs quite a lot of CPU/GPU horsepower, and at a much lower price than the other siblings of the Jetson family. It is currently available as a Developer Kit for around 109€ and consists of a System-on-Module (SoM) and a carrier board that provides HDMI, USB 3.0 and Ethernet ports.

DeepEdgeBench: Benchmarking Deep Neural Networks on Edge Devices Artificial Intelligence

EdgeAI (edge-computing-based Artificial Intelligence) has been actively researched in recent years to handle a variety of massively distributed AI applications under strict latency requirements. Meanwhile, many companies have released edge devices with small form factors (low power consumption and limited resources), such as the popular Raspberry Pi and Nvidia's Jetson Nano, to act as compute nodes in edge computing environments. Although these edge devices are limited in computing power and hardware resources, they are powered by accelerators to enhance their performance. It is therefore interesting to see how AI-based deep neural networks perform on such resource-constrained devices. In this work, we present and compare the performance, in terms of inference time and power consumption, of four Systems on a Chip (SoCs): the Asus Tinker Edge R, Raspberry Pi 4, Google Coral Dev Board, and Nvidia Jetson Nano, and one microcontroller: the Arduino Nano 33 BLE, on different deep learning models and frameworks. We also provide a method for measuring power consumption, inference time, and accuracy on these devices, which can easily be extended to other devices. Our results show that, for TensorFlow-based quantized models, the Google Coral Dev Board delivers the best performance in both inference time and power consumption. When inference makes up a low fraction of total computation time, i.e. less than 29.3% of the time for MobileNetV2, the Jetson Nano performs faster than the other devices.
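The core of a benchmark like this is timing repeated inference runs and reporting summary statistics. A minimal, device-agnostic sketch of that idea is below; the function names, warm-up counts, and the stand-in workload are illustrative assumptions, not the paper's actual code (on a real board, the timed callable would wrap something like a TFLite interpreter's `invoke()` on a preloaded model).

```python
import time
import statistics

def benchmark_inference(infer, n_warmup=5, n_runs=50):
    """Time repeated calls to an inference function and summarize latency.

    infer: a zero-argument callable standing in for one forward pass.
    Warm-up runs are executed first and discarded, so one-time costs
    (model loading, caching, memory allocation) do not skew the results.
    """
    for _ in range(n_warmup):
        infer()
    latencies_ms = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    return {
        "mean_ms": statistics.mean(latencies_ms),
        "median_ms": statistics.median(latencies_ms),
        "stdev_ms": statistics.stdev(latencies_ms),
    }

# Hypothetical stand-in workload so the sketch runs anywhere;
# replace with a real model's forward pass on an actual device.
def dummy_infer():
    sum(i * i for i in range(10_000))

stats = benchmark_inference(dummy_infer)
print(f"median latency: {stats['median_ms']:.3f} ms")
```

Reporting the median alongside the mean is a common choice here, since occasional scheduler or thermal hiccups on small boards can produce outlier runs that distort the mean.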

Nvidia Jetson Xavier NX review: Redefining GPU accelerated machine learning


Nvidia launched the Jetson Xavier NX embedded System-on-Module (SoM) at the end of last year. It is pin-compatible with the Jetson Nano SoM and includes a CPU, a GPU, PMICs, DRAM, and flash storage. However, it was missing an important accessory: its own development kit. Since a SoM is an embedded board with just a row of connector pins, it is hard to use out of the box. A development board connects all the pins on the module to ports like HDMI, Ethernet, and USB.

Seeed Studio Grove AI HAT for Raspberry Pi: Artificial, But Not Intelligent


Each successive generation of Raspberry Pi has brought something new to the table. The latest release, the Raspberry Pi 4, is no exception, upgrading the low-cost single-board computer to include true gigabit Ethernet connectivity, a high-performance 64-bit central processor, a more powerful graphics processor, and up to 4GB of RAM. Even with these impressive-for-the-price specifications, though, there's something the Raspberry Pi can't easily do unaided: deep learning and other artificial intelligence workloads. With an explosion of interest in AI-at-the-edge, there's a market for Raspberry Pi add-ons which offer to fill in the gap - and the Grove AI HAT is just such a device, billed by creator Seeed Studio as ideal for AI projects in fields from hobbyist robotics to the medical industry. In practice, it's a low-cost way to play with RISC-V and Kendryte's KPU, but more expensive than an Arduino for microcontroller use and too limited for general-purpose AI work.