Automating machine learning lifecycle with AWS

#artificialintelligence

The machine learning and data science lifecycle involves several phases. Each phase requires complex tasks executed by different teams, as explained by Microsoft in this article. To manage this complexity, cloud providers such as Amazon, Microsoft, and Google offer services that automate these tasks and speed up the end-to-end machine learning lifecycle. This article explains the Amazon Web Services (AWS) cloud services used for different tasks in the machine learning lifecycle. To make each service easier to understand, I provide a brief description, a use case, and a link to the documentation. Throughout this article, "machine learning lifecycle" and "data science lifecycle" are used interchangeably.
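
As a concrete illustration of the kind of automation described above, the sketch below uses the SageMaker Python SDK to run a training script on managed infrastructure and deploy the result as a real-time endpoint. This is a minimal sketch, not code from the article: the role ARN, S3 paths, `train.py` script, and framework version are placeholders, and exact parameters may differ across SDK releases.

```python
# Minimal sketch (assumed, not from the article): training and deploying with
# the SageMaker Python SDK. Role ARN, S3 paths, and train.py are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder IAM role

estimator = SKLearn(
    entry_point="train.py",       # your training script (assumed to exist)
    framework_version="1.2-1",    # version string may differ in your SDK release
    instance_type="ml.m5.xlarge",
    role=role,
    sagemaker_session=session,
)

# SageMaker provisions the training instance, runs train.py, and stores the
# resulting model artifact in S3.
estimator.fit({"train": "s3://my-bucket/train/"})  # placeholder dataset location

# One call turns the trained model into a managed real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```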


MLOps essentials: four pillars for Machine Learning Operations on AWS

#artificialintelligence

When we approach modern Machine Learning problems in an AWS environment, there is more to consider than traditional data preparation, model training, and final inference. Nor is raw computing power the only concern when creating an ML solution. There is a substantial difference between creating and testing a Machine Learning model locally in a Jupyter Notebook and releasing it on a production infrastructure capable of generating business value. The complexity of going live with a Machine Learning workflow in the Cloud is known as the deployment gap, and in this article we will see how to tackle it by combining speed and agility in modeling and training with the solidity, scalability, and resilience required by production environments. The shift is similar to what happened with the DevOps model for "traditional" software development, and the resulting paradigm, which we call MLOps, is commonly described as "an end-to-end process to design, create and manage Machine Learning applications in a reproducible, testable and evolutionary way". In the following paragraphs, we will dive deep into the reasons and principles behind the MLOps paradigm, how it relates to the AWS ecosystem, and the best practices of the AWS Well-Architected Framework. As said before, Machine Learning workloads are essentially complex pieces of software, so we can still apply "traditional" software practices.
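
To make the reproducibility point more tangible, here is a hedged sketch (not taken from the article) of a one-step SageMaker Pipeline, the kind of construct typically used to close the deployment gap the excerpt describes. The role ARN, S3 locations, pipeline name, and XGBoost container version are assumptions and will vary by account, region, and SDK release.

```python
# Hedged sketch: a one-step SageMaker Pipeline. All ARNs, S3 paths, and the
# pipeline name are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

# Built-in XGBoost container; the version string may differ by region/release.
image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.5-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",  # placeholder
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://my-bucket/train/", content_type="text/csv")},
)

pipeline = Pipeline(
    name="demo-mlops-pipeline",  # placeholder
    steps=[train_step],
    sagemaker_session=session,
)

pipeline.upsert(role_arn=role)  # register or update the pipeline definition
execution = pipeline.start()    # every run reproduces the same versioned workflow
```

The design point is that the pipeline definition, not an interactive notebook session, becomes the unit that is versioned, tested, and promoted to production.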


Intro to MLOps using Amazon SageMaker

#artificialintelligence

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider...


Your guide to AI and ML at AWS re:Invent 2021

#artificialintelligence

Only 9 days until AWS re:Invent 2021, and we're very excited to share some highlights you might enjoy this year. The AI/ML team has been working hard to serve up some amazing content, and this year we have more session types for you to enjoy. Back in person, we now have chalk talks, workshops, builders' sessions, and our traditional breakout sessions. Last year we hosted the first-ever machine learning (ML) keynote, and we are continuing the tradition. We also have more interactive and fun events happening with our AWS DeepRacer League and AWS BugBust Challenge.


Review: AWS AI and Machine Learning stacks up

#artificialintelligence

Amazon Web Services claims to have the broadest and most complete set of machine learning capabilities. I honestly don't know how the company can claim those superlatives with a straight face: yes, the AWS machine learning offerings are broad, fairly complete, and rather impressive, but so are those of Google Cloud and Microsoft Azure. Amazon SageMaker Clarify is the new add-on to the Amazon SageMaker machine learning ecosystem for Responsible AI. SageMaker Clarify integrates with SageMaker at three points: in the new Data Wrangler, to detect data biases at import time, such as imbalanced classes in the training set; in the Experiments tab of SageMaker Studio, to detect biases in the model after training and to explain the importance of features; and in SageMaker Model Monitor, to detect bias shifts in a deployed model over time. Historically, AWS has presented its services as cloud-only.
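
To show where one of those integration points sits in code, here is a hedged sketch of a pre-training bias check using the Clarify processor from the SageMaker Python SDK. The dataset schema, column names, S3 paths, and role ARN are invented placeholders, and parameter details may vary by SDK version.

```python
# Hedged sketch: pre-training bias analysis with SageMaker Clarify.
# All paths, column names, and the role ARN are placeholders.
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",    # placeholder dataset
    s3_output_path="s3://my-bucket/clarify-output/",  # placeholder report location
    label="approved",                                 # placeholder target column
    headers=["approved", "age", "income", "gender"],  # placeholder schema
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # which label value counts as the positive outcome
    facet_name="gender",            # sensitive attribute to test (placeholder)
)

# Computes pre-training bias metrics (e.g., class imbalance across the facet)
# and writes an analysis report to the S3 output path.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)
```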