Train and Deploy TensorFlow Models Optimized for Google Edge TPU
Edge computing devices are becoming the logical destination for running deep learning models. While the public cloud remains the preferred environment for training, it is the edge that runs the models for inference. Since most edge devices are constrained in available CPU and GPU resources, purpose-built AI chips have emerged to accelerate inference: these accelerators complement the CPU by speeding up the calculations involved in the forward pass of neural networks deployed at the edge.
Sep 8, 2019
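To make the train-in-the-cloud, infer-at-the-edge workflow concrete, the sketch below shows the step that sits between the two: converting a trained TensorFlow model into a fully integer-quantized TensorFlow Lite model, which is a prerequisite for running it on the Edge TPU. This is a minimal illustration assuming TensorFlow 2.x, a SavedModel directory named `model/`, and a 224x224x3 image input; the calibration data here is random and stands in for real samples, and the final Edge TPU compilation happens afterwards with the separate `edgetpu_compiler` tool.

```python
# Minimal sketch: post-training full-integer quantization for Edge TPU.
# Assumptions: TF 2.x, a SavedModel at "model/", 1x224x224x3 float input.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a handful of sample inputs so the converter can calibrate
    # the integer quantization ranges. Real calibration should use
    # samples drawn from the training data, not random noise.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU executes only 8-bit integer operations, so restrict
# the converter to the int8 builtin op set.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `model_quant.tflite` is then passed to the Edge TPU compiler (for example, `edgetpu_compiler model_quant.tflite`), which maps the supported integer ops onto the accelerator while leaving any unsupported ops to run on the CPU.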