TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. It enables on-device machine learning inference with low latency and a small binary size. TensorFlow Lite also supports hardware acceleration through the Android Neural Networks API. TensorFlow Lite uses many techniques to achieve low latency, such as kernels optimized for mobile apps, pre-fused activations, and quantized kernels that enable smaller, faster fixed-point models. Most of the TensorFlow Lite documentation is on GitHub for the time being.
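To make the workflow concrete, here is a minimal sketch of converting a model to the TensorFlow Lite format with post-training quantization enabled. The tiny Keras model is a hypothetical stand-in for a real mobile model; the converter API (`tf.lite.TFLiteConverter`) and the `Optimize.DEFAULT` flag are standard TensorFlow.

```python
import tensorflow as tf

# A tiny Keras model standing in for a real mobile model (hypothetical).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="relu"),
])

# Convert to the TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optimize.DEFAULT enables post-training quantization, producing the
# smaller, faster fixed-point kernels mentioned above.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The result is a byte string you can ship inside a mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file is what the on-device interpreter loads at inference time.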
TensorFlow Lite for machine learning on mobile devices was first announced by Dave Burke, VP of Engineering for Android, at Google I/O 2017. TensorFlow Lite is a lightweight version of Google's open-source TensorFlow library, used mainly for machine learning applications by researchers and developers.
TensorFlow is an open-source software library for numerical computation using data-flow graphs. It was originally developed by the Google Brain team within Google's Machine Intelligence research organization for machine learning and deep neural network research, but the system is general enough to be applicable to a wide variety of other domains as well. It runs on nearly everything: GPUs and CPUs, including mobile and embedded platforms, and even tensor processing units (TPUs), which are specialized hardware for tensor math. TensorFlow reached version 1.0 in February 2017 and has continued rapid development, with 21,000 commits so far, many from outside contributors. This article introduces TensorFlow, its open-source community and ecosystem, and highlights some interesting open-sourced TensorFlow models.
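A short sketch of the data-flow idea: `tf.function` traces a Python function into a graph of tensor operations that TensorFlow can then optimize and place on CPUs, GPUs, or TPUs. The function name and shapes here are illustrative only.

```python
import tensorflow as tf

# tf.function traces this Python function into a data-flow graph
# of tensor operations (matmul, add) connected by tensor edges.
@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b

x = tf.ones((1, 3))
w = tf.ones((3, 2))
b = tf.zeros((2,))
y = affine(x, w, b)
print(y.numpy())  # [[3. 3.]]
```

Because the computation is expressed as a graph rather than eager Python, the same program can be retargeted to different hardware without code changes.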
These new devices are made by Coral, Google's new platform for enabling embedded developers to build experiences with local AI. Coral's first products are powered by Google's Edge TPU chip and are purpose-built to run TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices. As a developer, you can use Coral devices to explore and prototype new applications for on-device machine learning inference. Coral's Dev Board is a single-board Linux computer with a removable System-on-Module (SoM) hosting the Edge TPU. It allows you to prototype applications and then scale to production by including the SoM in your own devices.
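Running a TensorFlow Lite model on such a device boils down to the interpreter API. The sketch below converts a tiny stand-in model in-process so it is self-contained and runs on CPU; on a Coral board you would instead load a model compiled for the Edge TPU and attach the Edge TPU delegate (shown only in a comment, since it needs the hardware).

```python
import numpy as np
import tensorflow as tf

# Build and convert a tiny stand-in model so the sketch is self-contained;
# on a Coral device you would load an Edge-TPU-compiled model instead.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(2)])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# On a Coral board you would also pass the Edge TPU delegate, e.g.:
# experimental_delegates=[tf.lite.experimental.load_delegate("libedgetpu.so.1")]
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones(inp["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result.shape)  # (1, 2)
```

The same interpreter code runs unchanged whether the delegate offloads the graph to the Edge TPU or everything falls back to the CPU.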
Deep learning has made several breakthroughs in recent years and has become more sophisticated and capable than traditional computation platforms. Smart homes, intelligent personal assistants, and similar applications are among the major breakthroughs of the present era. In this article, we list 8 platforms that can be used to build mobile deep learning solutions. Caffe2, Facebook's open-source deep learning framework, is a lightweight, modular, and scalable framework that provides an easy way to experiment with deep learning models and algorithms.