Deploying ML models using SageMaker Serverless Inference (Preview)

#artificialintelligence

Amazon SageMaker Serverless Inference (Preview) was announced at re:Invent 2021 as a new model hosting feature that lets customers serve model predictions without having to explicitly provision compute instances or configure scaling policies to handle traffic variations. Serverless Inference complements SageMaker's existing deployment options: SageMaker Real-Time Inference for workloads with low-latency requirements on the order of milliseconds, SageMaker Batch Transform for running predictions on batches of data, and SageMaker Asynchronous Inference for inferences with large payload sizes or long processing times. With Serverless Inference, you don't need to configure or manage the underlying infrastructure hosting your models. When you host your model on a Serverless Inference endpoint, you simply select the memory size and the maximum number of concurrent invocations. SageMaker then automatically provisions, scales, and terminates compute capacity based on the inference request volume.
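As a rough sketch of what this looks like with the boto3 SageMaker client (the model, config, and endpoint names below are placeholders, and the memory and concurrency values are only illustrative):

```python
import boto3

sm = boto3.client("sagemaker")

# A serverless endpoint config carries a ServerlessConfig instead of an
# instance type and count; SageMaker sizes capacity from these two knobs.
sm.create_endpoint_config(
    EndpointConfigName="my-serverless-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",  # a model you have already created
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,  # memory allocated to the endpoint
                "MaxConcurrency": 5,     # max concurrent invocations
            },
        }
    ],
)

# From here, SageMaker provisions, scales, and tears down compute
# automatically based on request volume.
sm.create_endpoint(
    EndpointName="my-serverless-endpoint",
    EndpointConfigName="my-serverless-config",
)
```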


SageMaker Serverless Inference illustrates Amazon's philosophy for ML workloads

#artificialintelligence

Amazon just unveiled Serverless Inference, a new option for SageMaker, its fully managed machine learning (ML) service. The goal of Amazon SageMaker Serverless Inference is to serve use cases with intermittent or infrequent traffic patterns, lowering total cost of ownership (TCO) and making the service easier to use. VentureBeat connected with Bratin Saha, AWS VP of Machine Learning, to discuss where Amazon SageMaker Serverless fits into the big picture of Amazon's machine learning offering, how it affects ease of use and TCO, and Amazon's philosophy and process in developing its machine learning portfolio. Inference is the production phase of ML-powered applications.


Deploying your ML models to AWS SageMaker

#artificialintelligence

We faced some difficulties with Streamlit.io; you can see our SageMaker implementation here. The purpose of this article is to provide a tutorial with examples showing how to deploy ML models to AWS SageMaker. This tutorial covers only deploying ML models that were not trained in SageMaker. Deploying models trained outside of AWS SageMaker is more complicated than training and deploying them end to end within SageMaker.
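For a model trained outside SageMaker, the usual pattern with the SageMaker Python SDK is to package the artifact as a model.tar.gz in S3 and pair it with a serving container; a minimal sketch, where the image URI, S3 path, role ARN, endpoint name, and instance settings are all placeholders:

```python
from sagemaker.model import Model

# Wrap an externally trained artifact (model.tar.gz in S3) together with
# a serving container image that knows how to load and serve it.
model = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/my-serving-image:latest",
    model_data="s3://my-bucket/models/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
)

# Create a real-time endpoint backed by the container and artifact above.
model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="my-external-model-endpoint",
)
```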


Identify paraphrased text with Hugging Face on Amazon SageMaker

#artificialintelligence

Identifying paraphrased text has business value in many use cases. For example, by identifying sentence paraphrases, a text summarization system could remove redundant information. Another application is identifying plagiarized documents. In this post, we fine-tune a Hugging Face transformer on Amazon SageMaker to identify paraphrased sentence pairs in a few steps. A truly robust model can identify paraphrases even when the wording is completely different, and can also spot differences in meaning when the wording has high lexical overlap.
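A sketch of how such a fine-tuning job can be launched with the SageMaker Hugging Face estimator; the training script, role, S3 paths, and hyperparameters below are assumptions for illustration:

```python
from sagemaker.huggingface import HuggingFace

# train.py (hypothetical) would hold standard transformers fine-tuning
# code for sentence-pair classification (paraphrase vs. not paraphrase).
huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="./scripts",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role="<your-sagemaker-execution-role>",
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters={
        "model_name_or_path": "bert-base-uncased",
        "epochs": 3,
    },
)

# Channel names and S3 locations are placeholders for the sentence-pair data.
huggingface_estimator.fit({
    "train": "s3://my-bucket/paraphrase/train",
    "test": "s3://my-bucket/paraphrase/test",
})
```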


Build a CI/CD pipeline for deploying custom machine learning models using AWS services

#artificialintelligence

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the ML process, making it easier to develop high-quality ML artifacts. AWS Serverless Application Model (AWS SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, event source mappings, steps in AWS Step Functions, and more. A typical workflow includes data collection, model training, testing, human evaluation of the ML model, and deployment of the model for inference.
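For illustration, one building block such a pipeline usually contains is a Lambda function, declared in the SAM template, that fronts the deployed model. A minimal sketch; the ENDPOINT_NAME environment variable is an assumed piece of configuration wired up by the template:

```python
import os

import boto3

runtime = boto3.client("sagemaker-runtime")


def lambda_handler(event, context):
    """Forward an API Gateway request body to a SageMaker endpoint."""
    response = runtime.invoke_endpoint(
        EndpointName=os.environ["ENDPOINT_NAME"],
        ContentType="application/json",
        Body=event["body"],
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": prediction}
```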