Deploying machine learning models
ML2SC: Deploying Machine Learning Models as Smart Contracts on the Blockchain
Li, Zhikai, Vott, Steve, Krishnamachari, Bhaskar
With the growing concern over AI safety, there is a need to trust the computations done by machine learning (ML) models. Blockchain technology, known for recording data and running computations transparently and in a tamper-proof manner, can offer this trust. One significant challenge in deploying ML classifiers on-chain is that while ML models are typically written in Python using an ML library such as PyTorch, smart contracts deployed on EVM-compatible blockchains are written in Solidity. We introduce Machine Learning to Smart Contract (ML2SC), a PyTorch-to-Solidity translator that can automatically translate multi-layer perceptron (MLP) models written in PyTorch into Solidity smart contract versions. ML2SC uses a fixed-point math library to approximate floating-point computation. After deploying the generated smart contract, we can train our models off-chain using PyTorch and then transfer the acquired weights and biases to the smart contract via a function call. Finally, model inference can also be done with a function call providing the input. We mathematically model the gas costs associated with deploying, updating model parameters, and running inference on these models on-chain, showing that the gas costs increase linearly in various parameters associated with an MLP. We present empirical results matching our modeling. We also evaluate classification accuracy, showing that the outputs obtained by our transparent on-chain implementation are identical to those of the original off-chain implementation in PyTorch.
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Economy (1.00)
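ML2SC's fixed-point trick can be illustrated in a few lines. The sketch below is plain Python, not the translator's actual Solidity output, and the 18-decimal scaling factor is an assumption (chosen because it is common in Solidity fixed-point math libraries): float weights are encoded as scaled integers, and each multiplication is rescaled so a dot product stays in fixed-point form.

```python
# Toy fixed-point arithmetic sketch (hypothetical 18-decimal scale,
# similar in spirit to common Solidity fixed-point math libraries).
SCALE = 10**18

def to_fixed(x: float) -> int:
    """Encode a float as a scaled integer, as when uploading trained weights."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point numbers, rescaling the product."""
    return a * b // SCALE

def fixed_dot(weights, inputs) -> int:
    """Fixed-point dot product: one neuron's pre-activation."""
    acc = 0
    for w, x in zip(weights, inputs):
        acc += fixed_mul(w, x)
    return acc

# Off-chain float weights -> on-chain fixed-point integers.
w = [to_fixed(0.5), to_fixed(-0.25)]
x = [to_fixed(2.0), to_fixed(4.0)]
y = fixed_dot(w, x)   # 0.5*2.0 + (-0.25)*4.0 = 0.0
print(y / SCALE)      # -> 0.0
```

Because both operands carry the scale factor, every product must be divided by `SCALE` once, which is why `fixed_mul` rescales before accumulation.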
Deploying Machine Learning Models to Ahead-of-Time Runtime on Edge Using MicroTVM
Liu, Chen, Jobst, Matthias, Guo, Liyuan, Shi, Xinyue, Partzsch, Johannes, Mayr, Christian
In the past few years, more and more AI applications have been deployed to edge devices. However, models trained by data scientists with machine learning frameworks such as PyTorch or TensorFlow cannot be seamlessly executed on the edge. In this paper, we develop an end-to-end code generator that parses a pre-trained model into C source libraries for the backend using MicroTVM, a machine learning compiler framework extension addressing inference on bare-metal devices. An analysis shows that specific compute-intensive operators can be easily offloaded to a dedicated accelerator via the Universal Modular Accelerator (UMA) interface, while the others are processed on the CPU cores. Using the automatically generated ahead-of-time C runtime, we conduct a hand gesture recognition experiment on an ARM Cortex-M4F core.
- Europe > Germany > Saxony > Dresden (0.05)
- North America > United States > New York > New York County > New York City (0.04)
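The offload decision the paper describes, compute-intensive operators to the accelerator and everything else to the CPU, can be sketched as a simple partitioning pass. Everything below is illustrative: the operator names, the supported-op set, and the list-based representation are made up, and MicroTVM's real UMA interface operates on the compiler's intermediate representation rather than on lists like this.

```python
# Toy sketch of partitioning a model's operators between a dedicated
# accelerator and the CPU cores. Names are hypothetical; MicroTVM's
# actual UMA interface matches operator patterns inside the compiler IR.
ACCEL_OPS = {"conv2d", "dense"}   # compute-intensive ops the accelerator supports

def partition(ops):
    """Split a list of (name, kind) operators into accelerator/CPU groups."""
    accel, cpu = [], []
    for name, kind in ops:
        (accel if kind in ACCEL_OPS else cpu).append(name)
    return accel, cpu

model_ops = [("conv1", "conv2d"), ("relu1", "relu"),
             ("fc1", "dense"), ("softmax1", "softmax")]
accel, cpu = partition(model_ops)
print(accel)  # -> ['conv1', 'fc1']
print(cpu)    # -> ['relu1', 'softmax1']
```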
Deploying Machine Learning Models: A Checklist
In The Checklist Manifesto, Atul Gawande shows how using checklists can make everyone's work more efficient and less error-prone. If they are useful for aircraft pilots and surgeons, we can use them to help with deploying machine learning models as well. While most of the steps might sound obvious, it's easy to forget them, or to leave them for "somebody" to do "later". In many cases, skipping those steps will sooner or later lead to problems, hence it's good to have them as a checklist. For more details, you can check resources like the Introducing MLOps book by Mark Treveil et al., the Building Machine Learning Powered Applications book by Emmanuel Ameisen, the free Full Stack Deep Learning course, the Rules of Machine Learning document by Martin Zinkevich, the ML Ops: Operationalizing Data Science report by David Sweenor et al., the Responsible Machine Learning report by Patrick Hall et al., the Continuous Delivery for Machine Learning article by Danilo Sato et al., the Machine Learning Systems Design page by Chip Huyen, and the ml-ops.org site.
- Education (0.51)
- Transportation > Air (0.36)
Deploying Machine Learning Models with Heroku
For starters, deployment is the process of integrating a trained machine learning model into a production environment, usually intended to serve an end user. Deployment is typically the last stage in the development lifecycle of a machine learning product. The "Model Deployment" stage above consists of a series of steps shown in the image below. For the purposes of this tutorial, I will use Flask to build the web application. In this section, let's train the machine learning model we intend to deploy. For simplicity, and to not divert from the primary objective of this post, I will deploy a linear regression model.
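The core of such a deployment is tiny: a fitted model plus a predict function exposed from a web route. The sketch below stands in for the post's setup with a one-feature linear regression fitted by closed-form least squares in plain Python (no scikit-learn, and the training data is made up), with the Flask wiring indicated only in comments since the route name is an assumption.

```python
# Minimal linear-regression "model" to wrap in a web endpoint.
# Pure-Python stand-in; the post itself trains with standard ML tooling.
def fit(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict(model, x):
    a, b = model
    return a * x + b

model = fit([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly linear toy data
print(predict(model, 5))                   # -> 10.0

# In the Flask app, predict() would back a route, e.g. (hypothetical):
#   @app.route("/predict")
#   def serve():
#       return {"y": predict(model, float(request.args["x"]))}
```

Keeping `fit` and `predict` separate mirrors the deployment split: training happens once offline, while the web process only loads the coefficients and serves predictions.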
TensorFlow 2 Pocket Reference: Building and Deploying Machine Learning Models: Tung, KC: 9781492089186: Amazon.com: Books
The TensorFlow ecosystem has evolved into many different frameworks serving a variety of roles and functions. That flexibility is part of the reason for its widespread adoption, but it also complicates the learning curve for data scientists, machine learning (ML) engineers, and other technical stakeholders. There are so many ways to manage TensorFlow models for common tasks (such as data and feature engineering, data ingestion, model selection, training patterns, cross-validation against overfitting, and deployment strategies) that the choices can be overwhelming. This pocket reference will help you make choices about how to do your work with TensorFlow, including how to set up common data science and ML workflows using TensorFlow 2.0 design patterns in Python. Examples describe and demonstrate TensorFlow coding patterns and other tasks you are likely to encounter frequently in the course of your ML project work.
Advantages of Deploying Machine Learning models with Kubernetes
A data scientist works hard to build a machine learning model that helps solve a business problem. However, when it comes to deploying the model, there are challenges: how to scale the model, how the model can interact with different services within or outside the application, how to automate repetitive operations, and so on. Kubernetes is a strong fit for these problems. In this blog, I will help you understand the basics of Kubernetes, its benefits for deploying Machine Learning (ML) models, and how to actually do the deployment using the Azure Kubernetes Service.
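To make the scaling point concrete, a minimal Deployment manifest for a containerized model server might look like the sketch below. The image name, port, label, and replica count are all placeholders; on AKS you would push the image to a registry first and apply the manifest with kubectl, typically putting a Service or ingress in front of the pods so other services get a stable address.

```yaml
# Hypothetical manifest: three replicas of a containerized model server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model
spec:
  replicas: 3                  # scale out by raising this (or add an HPA)
  selector:
    matchLabels:
      app: ml-model
  template:
    metadata:
      labels:
        app: ml-model
    spec:
      containers:
      - name: model-server
        image: myregistry.azurecr.io/ml-model:v1   # placeholder image
        ports:
        - containerPort: 5000
```

Because replicas are identical and stateless, scaling, rolling updates, and restarts of the model server become routine Kubernetes operations rather than custom work.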
7 Lessons I've Learnt From Deploying Machine Learning Models Using ONNX
In this post, we will outline key learnings from a real-world example of running inference on a scikit-learn model using the ONNX Runtime API in an AWS Lambda function. This is not a tutorial but rather a guide focusing on useful tips, points to consider, and quirks that may save you some head-scratching! The Open Neural Network Exchange (ONNX) format is a bit like dipping your french fries into a milkshake; it shouldn't work, but it just does. ONNX allows us to build a model using all the training frameworks we know and love, like PyTorch and TensorFlow, and package it up in a format supported by many hardware architectures and operating systems. The ONNX Runtime is a simple API that is cross-platform and provides optimal performance to run inference on an ONNX model exactly where you need it: the cloud, mobile, an IoT device, you name it!
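One quirk worth knowing in this setup is where model loading happens: in Lambda, code at module scope runs once per cold start, so the ONNX Runtime session should be created there rather than inside the handler. The sketch below uses a stub session class so it stays self-contained and the "prediction" is fake; in real code the marked line would be `onnxruntime.InferenceSession(...)` (the model path and `"input"` tensor name are assumptions).

```python
import json

# --- Runs once per cold start (module scope) ---------------------------
# Real code (assumption, kept as a comment to stay self-contained):
#   import onnxruntime
#   SESSION = onnxruntime.InferenceSession("/opt/model.onnx")
class _StubSession:                        # stand-in for an ONNX Runtime session
    def run(self, output_names, feeds):
        return [[sum(feeds["input"])]]     # fake "prediction"

SESSION = _StubSession()

def handler(event, context=None):
    """AWS Lambda entry point: parse the event, run inference, return JSON."""
    features = json.loads(event["body"])["features"]
    prediction = SESSION.run(None, {"input": features})[0]
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}

resp = handler({"body": json.dumps({"features": [1.0, 2.0, 3.0]})})
print(resp["body"])  # -> {"prediction": [6.0]}
```

Keeping the session at module scope means warm invocations skip the (often slow) model load entirely, which matters for Lambda latency.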
Deploying Machine Learning models with TensorFlow Serving -- an introduction
This post covers all the steps required to start serving Machine Learning models as web services with TensorFlow Serving, a flexible and high-performance serving system. In this example, we will set up a virtual environment in which we will generate synthetic data for a regression problem, train multiple models, and finally deploy them as web services, accessing predictions from REST APIs. The only prerequisites for this tutorial are a working machine with Python and Docker Engine installed. We will finally use curl to write API calls and consume the Machine Learning models through their prediction endpoints. A virtual environment is a self-contained Python environment that can be created to manage and segregate projects: it provides isolation so that the dependencies do not affect other packages on the same operating system.
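The shape of those REST calls is fixed by TensorFlow Serving's v1 API: `POST /v1/models/<name>:predict` with an `{"instances": ...}` JSON body, answered with `{"predictions": ...}`. Below is a small stdlib-only helper that builds such a request (host and model name are placeholders); a curl call as used in the post would send the same payload.

```python
import json
from urllib import request

def predict_request(host: str, model_name: str, instances):
    """Build a POST request for TF Serving's REST predict endpoint."""
    url = f"http://{host}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

req = predict_request("localhost:8501", "my_regressor", [[1.0], [2.0]])
print(req.full_url)  # -> http://localhost:8501/v1/models/my_regressor:predict

# Sending it (requires a running TF Serving container on port 8501):
#   with request.urlopen(req) as resp:
#       print(json.load(resp)["predictions"])
```

Port 8501 is TF Serving's default REST port; each row of `instances` is one input example, and `predictions` comes back in the same order.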
Considerations for Deploying Machine Learning Models in Production
A common grumble among data science and machine learning researchers and practitioners is that putting a model into production is difficult. As a result, some claim that a large percentage of models, 87%, never see the light of day in production. "I have a model; I spent considerable time developing it on my laptop. How do I get it into our production environment? What should I consider for my ML stack and tooling?"