
A Guide to MLOps in Production – Towards AI

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company. Countless hours of organized effort are required to bring a model to production, and the effort spent on all the earlier steps pays off only if the model is successfully deployed.


Advantages of Deploying Machine Learning models with Kubernetes

#artificialintelligence

A data scientist works hard to build a model that solves a business problem. When it comes to deploying that model, however, challenges arise: how to scale it, how it can interact with other services inside or outside the application, and how to automate repetitive operations. Kubernetes is a strong fit for these challenges. In this blog, I will help you understand the basics of Kubernetes, its benefits for deploying Machine Learning (ML) models, and how to actually perform a deployment using Azure Kubernetes Service.
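A model served from a Kubernetes pod is typically wrapped in a small scoring script with a load step and a request handler. The sketch below illustrates that shape only; the `init`/`run` names and the stub "model" are hypothetical stand-ins, not a real deployment artifact.

```python
# Minimal sketch of a scoring script of the kind a model server running in
# a Kubernetes pod might expose. The "model" is a trivial stub so the
# example stays self-contained; a real script would load serialized weights.
import json

model = None

def init():
    # In a real deployment this would load the trained model from disk
    # (e.g. from a volume mounted into the container).
    global model
    model = lambda features: sum(features)

def run(raw_request):
    # Parse a JSON payload of the form {"data": [[...], [...]]} and
    # return one prediction per input row, serialized back to JSON.
    payload = json.loads(raw_request)
    predictions = [model(row) for row in payload["data"]]
    return json.dumps({"predictions": predictions})
```

Keeping the load step separate from the request handler means the (usually expensive) model load happens once per pod, while each incoming request only pays for inference.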


Deploying AI Models in Azure Cloud

#artificialintelligence

Based on project experience with AI (Artificial Intelligence) and ML (Machine Learning) projects on the AML (Azure Machine Learning) platform since 2018, this article shares a point of view (the good parts) on bringing your AI models to production in Azure Cloud via MLOps. Typically, the initial experimentation (a trial-and-error approach) and the associated feasibility study for a given model take place first in AML/Jupyter Notebook(s). Once promising results have been obtained, analyzed, and validated "locally" by the Machine Learning Engineering team, the Application Engineering and DevOps Engineering teams can collaborate to "productionalize" the workload at scale in the Cloud (and/or at the Edge as needed). AML (Azure Machine Learning) is an MLOps-enabled, end-to-end Azure platform for building and deploying models in Azure Cloud. Please find more information about Azure Machine Learning (ML as a Service) here, and more holistically on Microsoft's AI/ML reference architectures and best practices here. The AML Python SDK is great for development and automation, the AML CLI is more convenient for Dev(Sec)Ops/MLOps automation, and the AML REST API is a lower-level interface that is typically not used on projects in favor of the former two approaches.
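The experiment-validate-productionalize handoff described above can be sketched as a sequence of pipeline stages. All names below are hypothetical stand-ins to illustrate the workflow, not AML SDK calls; real pipelines would replace each stage with the corresponding notebook, validation, and deployment automation.

```python
# Illustrative sketch of the promotion flow: experiment "locally",
# validate the result, and only then hand off to a production step.
# Every function here is a hypothetical placeholder.

def experiment():
    # Trial-and-error phase (e.g. in a notebook); returns a candidate
    # model, represented here by its name and a quality metric.
    return {"name": "demo-model", "accuracy": 0.91}

def validate(candidate, threshold=0.85):
    # Feasibility check: nothing is promoted below the quality bar.
    return candidate["accuracy"] >= threshold

def productionalize(candidate):
    # In a real pipeline this is where DevOps automation (SDK/CLI)
    # would register and deploy the model at scale.
    return {"deployed": candidate["name"], "stage": "production"}

def mlops_pipeline():
    candidate = experiment()
    if not validate(candidate):
        raise ValueError("candidate failed validation; not promoted")
    return productionalize(candidate)
```

The explicit validation gate between experimentation and deployment is the point of the sketch: it mirrors the "locally validated first, then productionalized" split between the engineering teams described above.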


ONNX Runtime on Azure Kubernetes Service -- ONNX Runtime 1.9.99 documentation

#artificialintelligence

Throughout this tutorial, we will be referring to ONNX, a neural network exchange format used to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools (CNTK, PyTorch, Caffe, MXNet, TensorFlow) and choose the combination that is best for them. ONNX is developed and supported by a community of partners including Microsoft AI, Facebook, and Amazon. For more information, explore the ONNX website and open source files. ONNX Runtime is the runtime engine that enables evaluation of trained machine learning (traditional ML and Deep Learning) models with high performance and low resource utilization.


New in Stream Analytics: Machine Learning, online scaling, custom code, and more

#artificialintelligence

Azure Stream Analytics is a fully managed Platform as a Service (PaaS) that supports thousands of mission-critical customer applications powered by real-time insights. Out-of-the-box integration with numerous other Azure services enables developers and data engineers to build high-performance, hot-path data pipelines within minutes. The key tenets of Stream Analytics include Ease of use, Developer productivity, and Enterprise readiness. Today, we're announcing several new features that further enhance these key tenets. Let's take a closer look at these features: Rollout of these preview features begins November 4th, 2019.


Azure Machine Learning concepts - an Introduction

#artificialintelligence

This blog relates to the forthcoming book. If you don't have an Azure subscription, you can try the free or paid version of the Azure Machine Learning service; it is important to delete resources after use. Azure Machine Learning is a cloud service that you use to train, deploy, automate, and manage machine learning models, with all the benefits of a cloud deployment. In terms of development, you can use Jupyter notebooks and the Azure SDKs. You can see more details of the Azure Machine Learning Python SDK, which allows you to choose environments such as scikit-learn, TensorFlow, PyTorch, and MXNet.


Democratizing Machine Learning at Scale with Kubernetes on Azure - The New Stack

#artificialintelligence

Microsoft sponsored this article, which was written by The New Stack. Containers and the Kubernetes orchestrator are emerging as core technologies for delivering distributed training at the scale that effective machine learning requires to be reliable in production use cases. Massive labeled datasets and the tools to work with them have driven the rise of machine learning from an obscure research technique to a mainstream technology. Years of gradual improvements in algorithms have been bundled into frameworks like TensorFlow and PyTorch, and the massive parallelism of GPUs speeds up the process of training those algorithms. Containers have their own tradeoffs, but with the right tools and cloud scale they're showing real promise in scale-out architectures that support distributed machine learning environments.