
Machine Learning at the Edge with AWS Outposts and Amazon SageMaker


As customers continue to find new use cases for machine learning, data gravity is as important as ever. Where latency and network connectivity are not issues, generating data in one location (such as a manufacturing facility) and sending it to the cloud for inference is acceptable for some use cases. For other, critical use cases, such as fraud detection for financial transactions, product quality in manufacturing, or analyzing video surveillance in real time, customers face the challenges that come with having to move that data to the cloud first. Chief among these challenges are the difficulty of achieving real-time inference in the cloud and security requirements that prevent user data from being sent to or stored in the cloud. Tens of thousands of customers use Amazon SageMaker to accelerate their machine learning (ML) journey by helping data scientists and developers prepare, build, train, and deploy machine learning models quickly.

XGBoost in Amazon SageMaker


SageMaker is Amazon Web Services' (AWS) fully managed, cloud-based machine learning platform that lets you perform an entire data science workflow in one place. In this post, I will show you how to upload your data to Amazon S3 (bypassing local storage), read that data back for training, train a model, deploy an endpoint, perform predictions, and run hyperparameter tuning. The data cleaning and feature engineering code is derived from this blog post, written by Andrew Long, who gave full permission to use his code. The dataset can be found here.
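The workflow described above (upload to S3, train with built-in XGBoost, deploy an endpoint) can be sketched with the SageMaker Python SDK roughly as follows. This is a minimal sketch, not the post's actual code: the file name, instance type, S3 prefix, and hyperparameter values are placeholders, and it requires AWS credentials and the `sagemaker` package to actually run.

```python
def xgb_hyperparameters(num_round=100):
    """Illustrative starting hyperparameters for the built-in XGBoost algorithm."""
    return {
        "objective": "binary:logistic",
        "eta": 0.2,
        "max_depth": 5,
        "num_round": num_round,
    }


def train_and_deploy(train_csv="train.csv", instance_type="ml.m5.large"):
    """Upload a local CSV straight to S3, train built-in XGBoost, deploy an endpoint.

    Sketch only: assumes AWS credentials and a SageMaker execution context.
    """
    import sagemaker
    from sagemaker.estimator import Estimator
    from sagemaker.inputs import TrainingInput

    session = sagemaker.Session()
    role = sagemaker.get_execution_role()  # works inside SageMaker notebooks/Studio
    bucket = session.default_bucket()

    # Upload the training data directly to S3
    train_uri = session.upload_data(train_csv, bucket=bucket, key_prefix="xgb-demo")

    # Resolve the region-specific container image for the built-in XGBoost algorithm
    image_uri = sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.5-1"
    )

    xgb = Estimator(
        image_uri=image_uri,
        role=role,
        instance_count=1,
        instance_type=instance_type,
        output_path=f"s3://{bucket}/xgb-demo/output",
        sagemaker_session=session,
    )
    xgb.set_hyperparameters(**xgb_hyperparameters())
    xgb.fit({"train": TrainingInput(train_uri, content_type="text/csv")})

    # Deploy a real-time endpoint; the returned predictor serves predictions
    return xgb.deploy(initial_instance_count=1, instance_type=instance_type)
```

Hyperparameter tuning follows the same pattern, wrapping the estimator in a `HyperparameterTuner` instead of calling `fit` directly.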

Optimizing the price-performance ratio of a Serverless Inference Service with Amazon SageMaker


I have recently published a step-by-step guide to serverless model deployments with Amazon SageMaker Pipelines, Amazon API Gateway, and AWS Lambda. With AWS Lambda, you pay only for what you use. Lambda charges based on the number of requests, execution duration, and amount of memory allocated to the function. So how much memory should you allocate to your inference function? In this post, I will show how you can use SageMaker hyperparameter optimization (HPO) jobs and a load-testing tool to automatically optimize the price-performance ratio of your serverless inference service.
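Because Lambda bills by requests, duration, and allocated memory, the quantity being tuned can be sketched as a small cost model. The pricing constants below are AWS's published x86 Lambda rates for most regions at the time of writing and may change; the combined metric is an illustrative stand-in for whatever objective the tuning job actually reports.

```python
# AWS Lambda pricing (x86, most regions, as of writing -- verify before relying on it):
# $0.20 per 1M requests, plus $0.0000166667 per GB-second of compute.
PRICE_PER_REQUEST = 0.20 / 1_000_000
PRICE_PER_GB_SECOND = 0.0000166667


def lambda_cost_per_invocation(memory_mb: int, duration_ms: float) -> float:
    """Cost of one invocation at the given memory size and billed duration."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND


def price_performance(memory_mb: int, latency_ms: float) -> float:
    """Illustrative objective for an HPO job over memory size: latency times
    cost per invocation, lower is better. The post's exact metric may differ."""
    return latency_ms * lambda_cost_per_invocation(memory_mb, latency_ms)
```

The trade-off this captures: more memory also buys more CPU, so a function may finish faster at a higher per-second rate. For example, 1024 MB at 120 ms can be cheaper per invocation than 2048 MB at 70 ms, even though the latter is faster; an HPO job searching over memory sizes, fed latency measurements from a load test, finds the sweet spot automatically.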

Amazon announces price cuts on GPU instances in AWS Sagemaker


Amazon Web Services is cutting the price of GPU instances on SageMaker, its fully managed machine learning service. AWS said customers will see up to 18% in price reductions on all ml.p2 and ml.p3 GPU instances. The price cuts will apply from October 1 for all SageMaker components across the following regions: US East (N. AWS also announced Wednesday that it's launching an interactive training series for the first time ever on its Twitch streaming platform. The training series will offer free skills training for the AWS Certified Cloud Practitioner certification, which provides people with in-demand cloud skills and continues to be one of the top-paying cloud certifications for job seekers.

AWS Announces Nine New Amazon SageMaker Capabilities


Distributed training on Amazon SageMaker delivers new capabilities that can train large models up to two times faster than would otherwise be possible with today's machine learning processors. Amazon Web Services (AWS), an Amazon.com, Inc. company, announced nine new capabilities for its industry-leading machine learning service, Amazon SageMaker, making it even easier for developers to automate and scale all steps of the end-to-end machine learning workflow. Today's announcements bring together powerful new capabilities: faster data preparation, a purpose-built repository for prepared data, workflow automation, greater transparency into training data to mitigate bias and explain predictions, distributed training capabilities to train large models up to two times faster, and model monitoring on edge devices. Machine learning is becoming more mainstream, but it is still evolving at a rapid clip. With all the attention machine learning has received, it seems like it should be simple to create machine learning models, but it isn't. In order to create a model, developers need to start with the highly manual process of preparing the data.