Tutorial: Edge AI with Triton Inference Server, Kubernetes, Jetson Mate


In this tutorial, we will configure and deploy Nvidia Triton Inference Server on the Jetson Mate carrier board to perform inference of computer vision models. It builds on our previous post, where we introduced Jetson Mate from Seeed Studio as a platform for running a Kubernetes cluster at the edge. Though this tutorial focuses on Jetson Mate, you can also use one or more Jetson Nano Developer Kits connected to a network switch to run the Kubernetes cluster. Assuming you have installed and configured JetPack 4.6.x on all four Jetson Nano 4GB modules, let's start with the installation of K3s. The first step is to make the Nvidia Container Toolkit the default runtime for Docker.
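The original post does not include the configuration file itself, but the standard way to do this on JetPack is to edit `/etc/docker/daemon.json` so that `nvidia-container-runtime` is registered and set as the default runtime, then restart Docker. A minimal sketch, to be run on every Jetson Nano module in the cluster:

```sh
# Register the Nvidia runtime and make it Docker's default
# by writing /etc/docker/daemon.json.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF

# Restart Docker so the new default runtime takes effect.
sudo systemctl restart docker
```

With this in place, every container scheduled by Kubernetes gets GPU access without needing an explicit `--runtime nvidia` flag, which is what Triton will rely on in the steps that follow.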