Unpacking the Complexity of Machine Learning Deployments

#artificialintelligence

Deploying and maintaining machine learning models at scale is one of the most pressing challenges organizations face today. The machine learning workflow, which includes training, building, and deploying models, can be a long process with many roadblocks along the way. Many data science projects never make it to production because of challenges that slow down or halt the entire process. To overcome the challenges of model deployment, we need to identify the problems and learn what causes them. End-to-end ML applications often comprise components written in different programming languages.
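
As a rough, hedged illustration of the train-build-deploy handoff the excerpt describes (not code from the article), the sketch below trains a small scikit-learn model and serializes it as an artifact that a separate serving component, possibly written in another language or framework, could consume; the dataset, model choice, and file name are placeholders.

```python
# Minimal sketch of the "train, build, deploy" handoff (illustrative only).
# Assumes scikit-learn and joblib are installed; dataset and paths are placeholders.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                        # training step
print("holdout accuracy:", model.score(X_test, y_test))

# "Build/deploy" step: persist the trained model as an artifact that a
# separate serving process (possibly a different runtime) can load later.
joblib.dump(model, "model.joblib")
```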


How Machine Learning Pipelines Work and What Needs Improving - The New Stack

#artificialintelligence

This article is a post in a series on bringing continuous integration and deployment (CI/CD) practices to machine learning. Check back to The New Stack for future installments. The pipeline runs from ingesting and cleaning data, through feature engineering and model selection in an interactive workbench environment, to training and experimentation (usually with the option to share results), to deploying the trained model, and finally to serving results such as predictions and classifications. The machine learning development and deployment pipelines are often separate, but unless the model is static, it will need to be retrained on new data or updated as the world changes, then updated and versioned in production, which means going through several steps of the pipeline again and again. Managing the complexity of these pipelines is getting harder, especially when you're trying to use real-time data and update models frequently.
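
To make those stages concrete, here is a hedged sketch (not taken from the article) using scikit-learn's Pipeline object: preprocessing and the model are chained together, and re-fitting on fresh data stands in for the retrain-and-reversion loop the author describes. The data and the version label are invented for illustration.

```python
# Illustrative pipeline sketch: scale features, train, then "retrain" when
# new data arrives. Data and the version tag are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)

pipeline = Pipeline([
    ("scale", StandardScaler()),                      # cleaning / feature step
    ("model", RandomForestClassifier(random_state=0)),
])
pipeline.fit(X_old, y_old)                            # initial training run

# Later: the world changes and new data arrives, so the same pipeline is
# re-fit and re-versioned before being promoted to production again.
X_new, y_new = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
pipeline.fit(np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))
MODEL_VERSION = "v2"                                  # hypothetical version tag
```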


The ABC of DevOps Implementation With Containerization and Docker - DZone DevOps

#artificialintelligence

DevOps is all the rage in the IT industry. As Wikipedia puts it, DevOps is a set of practices that combines software development (Dev) and information-technology operations (Ops), aiming to shorten the systems development life cycle and provide continuous delivery with high software quality. The primary reason for the popularity of DevOps is that it allows enterprises to develop and improve products at a quicker pace than traditional software development methods allow. As our ever-changing work environment becomes more fast-paced, the demand for faster delivery and fixes in the software development market is on the rise. The need to produce high-quality output in a short period with few post-production errors is what gave birth to DevOps.


The ABC of Containerization - DevOps Implementation

#artificialintelligence

Having previously discussed on our blog the importance of switching to a DevOps way of software development, we now shift the conversation to containerization, a popular technology that is increasingly being used to make the implementation of DevOps smoother and easier. As we know, DevOps is a cultural practice of bringing together the 'development' and the 'operations' verticals so that both teams work collaboratively instead of in silos, whereas containerization is a technology that makes it easier to follow the DevOps practice. But what exactly is containerization? Containerization is the process of packaging an application together with its required libraries, frameworks, and configuration files so that it can be run efficiently in various computing environments. In simpler terms, containerization is the encapsulation of an application and its required environment.
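
As a concrete, hedged illustration of "an application plus its required environment" (not from the article), the snippet below writes out a minimal Dockerfile for a hypothetical Python model-serving app and notes the Docker commands that would package and run it; the base image, file names, and image tag are assumptions.

```python
# Sketch: generate a minimal Dockerfile that packages an application together
# with its libraries and configuration. Base image, file names, and tag are
# hypothetical placeholders, not taken from the article.
from pathlib import Path

dockerfile = """\
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "serve.py"]
"""

Path("Dockerfile").write_text(dockerfile)

# To actually build and run the container (requires Docker to be installed):
#   docker build -t ml-service:latest .
#   docker run -p 8000:8000 ml-service:latest
```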


How to Deploy Machine Learning Models: The Ultimate Guide

#artificialintelligence

The deployment of machine learning models is the process of making your models available in production environments, where they can provide predictions to other software systems. It is only once models are deployed to production that they start adding value, which makes deployment a crucial step. However, deploying machine learning models involves real complexity. This post aims, at the very least, to make you aware of where this complexity comes from, and I hope it will also give you useful tools and heuristics to combat it. If it's code, step-by-step tutorials, and example projects you are looking for, you might be interested in the Udemy course "Deployment of Machine Learning Models".
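
For a sense of what "available in production, providing predictions to other software systems" can look like in the simplest case, here is a hedged sketch of a prediction endpoint built with Flask; the model file, route, and input format are assumptions for illustration, and the course's own examples may differ.

```python
# Minimal model-serving sketch (illustrative, not the course's code).
# Assumes a model was previously saved with joblib.dump(model, "model.joblib").
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")   # hypothetical artifact path

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()      # e.g. {"features": [[5.1, 3.5, 1.4, 0.2]]}
    prediction = model.predict(payload["features"])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```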