#013 TF TensorFlow Lite - Master Data Science, 29.02.2020

#artificialintelligence

Highlights: In this post we show how to build a computer vision model and prepare it for deployment on mobile and embedded devices. Last time, we showed how to improve a model's performance using transfer learning. But why would we only use our model to predict images of cats or dogs on our computer, when we could run it on a smartphone or another embedded device? TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. It allows us to run machine learning models on mobile devices with low latency and without needing to access a server.
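As a taste of the workflow the post walks through, here is a minimal sketch of converting a trained Keras model to the TensorFlow Lite format. The toy architecture, input shape, and file name below are illustrative assumptions, not the post's exact code; any trained tf.keras model converts the same way.

```python
import tensorflow as tf

# Stand-in for the cats-vs-dogs transfer-learning model from the
# previous post (architecture and input size are assumptions).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(160, 160, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cat vs. dog
])

# Convert the in-memory Keras model to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the .tflite file that gets bundled with the mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```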


TensorFlow Lite machine learning on Arduino MKR WIFI 1010

#artificialintelligence

This example demonstrates the training, conversion and deployment of a simple TensorFlow model to Arduino: https://t.co/wdRW8G8s0n As it says, it's a quick demo of that workflow on an Arduino MKR device. Of course, full disclosure, this is the baby step of baby steps: the program simply outputs a sine wave. "This example is designed to demonstrate the absolute basics of using TensorFlow Lite for Microcontrollers. It includes the full end-to-end workflow of training a model, converting it for use with TensorFlow Lite, and running inference on a microcontroller. The sample is built around a model trained to replicate a sine function. It contains implementations for several platforms. In each case, the model is used to generate a pattern of data that is used to either blink LEDs or control an animation."
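For orientation, here is a rough sketch of the training-and-conversion half of that workflow, in the spirit of the sine example; the layer sizes and hyperparameters are illustrative guesses, not the sample's exact values.

```python
import numpy as np
import tensorflow as tf

# Noisy-free samples of y = sin(x) over one period.
x = np.random.uniform(0, 2 * np.pi, 1000).reshape(-1, 1).astype(np.float32)
y = np.sin(x)

# A tiny fully connected network, small enough for a microcontroller.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=100, batch_size=32, verbose=0)

# Convert and save the flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("sine_model.tflite", "wb") as f:
    f.write(converter.convert())

# On a Unix shell, `xxd -i sine_model.tflite > sine_model.h` turns the
# flatbuffer into a C array that an Arduino sketch can compile in.
```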


How TensorFlow Lite Optimizes Neural Networks for Mobile Machine Learning

#artificialintelligence

The steady rise of mobile Internet traffic has provoked a parallel increase in demand for on-device intelligence capabilities. However, the inherent scarcity of resources at the Edge means that satisfying this demand will require creative solutions to old problems. How do you run computationally expensive operations on a device that has limited processing capability without it turning into magma in your hand? The addition of TensorFlow Lite to the TensorFlow ecosystem provides us with the next step forward in machine learning capabilities, allowing us to harness the power of TensorFlow models on mobile and embedded devices while maintaining low latency, efficient runtimes, and accurate inference. TensorFlow Lite provides the framework for a trained TensorFlow model to be compressed and deployed to a mobile or embedded application.
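To make the compression step concrete, here is a small sketch of post-training dynamic-range quantization applied at conversion time; the SavedModel directory name is a placeholder, and this is only one of several optimization options TensorFlow Lite offers.

```python
import tensorflow as tf

# Load a trained model exported in the SavedModel format
# ("saved_model_dir" is an illustrative path).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Post-training dynamic-range quantization: weights are stored as
# 8-bit integers, shrinking the model roughly 4x and typically
# speeding up inference on mobile CPUs.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_quant_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)
```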


Introduction to TensorFlow Lite - TensorFlow

#artificialintelligence

TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. It enables on-device machine learning inference with low latency and a small binary size. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API. TensorFlow Lite uses many techniques for achieving low latency, such as optimizing kernels for mobile apps, pre-fused activations, and quantized kernels that allow smaller and faster (fixed-point math) models. Most of our TensorFlow Lite documentation is on GitHub for the time being.
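To show where that low-latency inference happens, here is a minimal loop using the TensorFlow Lite interpreter from Python; the model path and the zero-filled input are placeholders for a real converted model and real data.

```python
import numpy as np
import tensorflow as tf

# Load the converted flatbuffer and allocate tensors once, up front.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```

On Android, the same interpreter is driven through the TensorFlow Lite Java/Kotlin API, optionally delegating work to the Neural Networks API mentioned above.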


Applying Machine Learning on Mobile Devices

#artificialintelligence

In the modern world, machine learning is used in many fields: image classification, consumer demand forecasting, film and music recommendations, clustering. At the same time, for fairly large models, computing predictions (and, to a much greater degree, training the model) can be resource-intensive. To make trained models usable on devices well short of the most powerful hardware, Google introduced its TensorFlow Lite framework. To work with it, you train a model built using the full TensorFlow framework (not Lite!) and then convert it to the TensorFlow Lite format. After that, the model can easily be used on embedded or mobile devices.
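A minimal end-to-end sketch of that recipe follows: train with the full TensorFlow framework, export, convert. The toy model and directory names are assumptions for illustration, not a prescribed implementation.

```python
import tensorflow as tf

# Train any model with the full TensorFlow/Keras API (toy placeholder).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(...) with real training data would go here ...

# Export the trained model, then convert the export to TFLite format.
tf.saved_model.save(model, "exported_model")
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model")
with open("mobile_model.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting .tflite file is then shipped with the mobile or embedded app and executed by the TensorFlow Lite interpreter on the device.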