At WWDC 2017 Apple announced ways it uses machine learning and ways for developers to add machine learning to their own applications. Its machine learning framework, called Core ML, allows developers to integrate machine learning models into apps running on Apple devices with iOS, macOS, watchOS, and tvOS. Models reside on the device itself, so data never leaves the device. Several APIs are available out of the box, so application developers can use them without bundling any additional models into their apps. Examples of such built-in computer vision capabilities include face detection and tracking, landmark detection, and event detection.
Google recently introduced ML Kit, a machine-learning module fully integrated into its Firebase mobile development platform and available for both iOS and Android. With this new Firebase module, Google simplifies the creation of machine-learning-powered mobile applications and addresses several challenges specific to mobile devices, which arise from the computationally intensive operations that artificial intelligence requires. ML Kit lets mobile developers build machine-learning features based on some of the models available in Google's deep-learning Vision API, such as image labeling, OCR, and face detection, and it sits directly within the Firebase platform alongside other Google Cloud-based modules such as authentication and storage.
Deep learning has made several breakthroughs in recent years, and applications such as smart homes and intelligent personal assistants are among the most visible results. Running deep learning on mobile devices, however, requires frameworks designed for that environment. In this article, we list eight platforms that can be used to build mobile deep learning solutions. Facebook's open-source deep learning framework, Caffe2, is a lightweight, modular, and scalable framework that provides an easy way to experiment with deep learning models and algorithms.
At WWDC Apple released Core ML 2, a new version of its machine learning SDK for iOS devices. The new release of Core ML, whose first version was released in June 2017, should deliver an inference-time speedup of 30% for apps developed using Core ML 2. Apple achieves this using two techniques called "batch prediction" and "quantization". Batch prediction refers to the practice of predicting for multiple inputs at the same time (e.g. classifying a whole array of images in a single call rather than one image at a time). Quantization is the practice of representing weights and activations in fewer bits during inference than during training. During training, floating-point numbers are used for weights and activations, but they slow down computation considerably during inference on non-GPU devices.
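To make the idea of quantization concrete, here is a minimal sketch, not Core ML's actual implementation, of linear 8-bit quantization: float32 weights are mapped to unsigned 8-bit integers with a scale and offset, then mapped back at inference time with only a small reconstruction error. The function names and parameters are illustrative, not part of any SDK.

```python
import numpy as np

def quantize(weights, bits=8):
    """Map float weights linearly onto integers in [0, 2**bits - 1]."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / (2**bits - 1)
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the quantized integers."""
    return q.astype(np.float32) * scale + lo

# Simulate a layer's weights and measure the quantization error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, scale, lo = quantize(w)
w_hat = dequantize(q, scale, lo)
print("scale:", scale)
print("max reconstruction error:", np.abs(w - w_hat).max())
```

Because rounding is to the nearest quantization step, the per-weight error is bounded by half the step size (`scale / 2`), which is why storing weights in 8 bits instead of 32 usually costs little accuracy while cutting model size by roughly 4x.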
Matt Winkler delivered a talk at Microsoft Build 2018 explaining what is new in Azure Machine Learning. The Azure Machine Learning platform is built from the hardware level up and is open to the tools and frameworks of your choice: if it runs on Python, it can run within the platform. Services come in three flavors: conversational, pre-trained, and custom AI.