ML inferencing at the edge with Amazon SageMaker Edge and Ambarella CV25
Ambarella builds computer vision SoCs (systems on chips) based on CVflow, a highly efficient AI chip architecture that provides the deep neural network (DNN) processing required for edge inferencing use cases such as intelligent home monitoring and smart surveillance cameras. Developers convert models trained with frameworks such as TensorFlow or MXNet to the Ambarella CVflow format so that these models can run on edge devices. Amazon SageMaker Edge has integrated the Ambarella toolchain into its workflow, allowing you to easily convert and optimize your models for the platform. In this post, we show how to set up model optimization and conversion with SageMaker Edge, add the model to your edge application, and deploy and test your new model on an Ambarella CV25 device to build a smart surveillance camera application running on the edge. Smart security cameras have use case-specific machine learning (ML) features such as detecting vehicles and animals, or identifying possible suspicious behavior, parking violations, or zone violations.
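The conversion step above is driven by a SageMaker Neo compilation job that targets the Ambarella device. The sketch below builds the request that such a job would take; the bucket paths, role ARN, job name, and input shape are placeholders (assumptions), not values from this post. In practice you would pass this dictionary to the boto3 SageMaker client's `create_compilation_job` call.

```python
# Sketch: parameters for compiling a trained model for an Ambarella CV25
# target with SageMaker Neo. All names, ARNs, and S3 paths are placeholders.

def build_compilation_request(job_name="cv25-surveillance-model"):
    """Return the request dict for a Neo compilation job (assumed values)."""
    return {
        "CompilationJobName": job_name,
        "RoleArn": "arn:aws:iam::123456789012:role/SageMakerNeoRole",  # placeholder
        "InputConfig": {
            "S3Uri": "s3://my-bucket/models/model.tar.gz",  # placeholder
            # Input name and shape depend on your trained model.
            "DataInputConfig": '{"data": [1, 3, 224, 224]}',
            "Framework": "TENSORFLOW",
        },
        "OutputConfig": {
            "S3OutputLocation": "s3://my-bucket/compiled/",  # placeholder
            # Target identifier assumed for the CV25 platform.
            "TargetDevice": "amba_cv25",
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }

request = build_compilation_request()
```

Once the job finishes, the compiled artifact in `S3OutputLocation` is what gets packaged and deployed to the device.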
Serving a Machine Learning Model Via REST API
To build our API, we are going to use the Flask micro-framework. Flask is a popular choice for microservices in Python because it is powerful, simple to use, and well documented -- see the Flask documentation. "Micro" does not mean that your whole web application has to fit into a single Python file (although it certainly can), nor does it mean that Flask is lacking in functionality. The "micro" in micro-framework means Flask aims to keep the core simple but extensible. Let's first take a look at our directory structure… The topmost directory is ml_api, and within this directory there is requirements.txt
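A minimal sketch of the service described above, assuming a single `/predict` endpoint. The model here is a stand-in (a trivial scoring function); in the real app you would load a trained model (for example, with joblib) at startup instead.

```python
# Minimal Flask service exposing a model behind a REST endpoint.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Stand-in for model.predict(): returns the mean of the inputs.
    return {"score": sum(features) / max(len(features), 1)}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    # Parse the JSON body and run the (placeholder) model.
    payload = request.get_json(force=True)
    features = payload.get("features", [])
    return jsonify(predict(features))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A client would then call the service with a JSON body like `{"features": [1.0, 3.0]}` and receive the prediction as JSON back.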
Adding AI to your applications with ready-to-use models from AWS Marketplace (Amazon Web Services)
Machine learning (ML) lets enterprises unlock the true potential of their data, automate decisions, and transform their business processes to deliver exponential value to their customers. To help you take advantage of ML, Amazon SageMaker provides the ability to build, train, and deploy ML models quickly. Until recently, if you used Amazon SageMaker, you could either choose optimized algorithms offered in Amazon SageMaker or bring your own algorithms and models. AWS Marketplace for Machine Learning increases the selection of ML algorithms and models. In this post, you learn how to deploy and perform inference on the Face Anonymizer model package from the AWS Marketplace for Machine Learning.
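Deploying a Marketplace model package follows the usual SageMaker hosting flow: create a model from the package ARN, create an endpoint configuration, and create the endpoint. The sketch below builds the three request dicts; the ARNs, names, and instance type are placeholders (assumptions). In practice each dict would be passed to the corresponding boto3 SageMaker client call (`create_model`, `create_endpoint_config`, `create_endpoint`).

```python
# Sketch: request dicts for hosting a Marketplace model package.
# All ARNs and names are placeholders, not values from this post.

MODEL_PACKAGE_ARN = (
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/face-anonymizer"
)

def build_deployment_requests(name="face-anonymizer"):
    """Return (model, endpoint_config, endpoint) request dicts."""
    model = {
        "ModelName": name,
        # Reference the Marketplace package instead of a container image.
        "Containers": [{"ModelPackageName": MODEL_PACKAGE_ARN}],
        "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    }
    endpoint_config = {
        "EndpointConfigName": f"{name}-config",
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": name,
                "InstanceType": "ml.m5.xlarge",  # placeholder choice
                "InitialInstanceCount": 1,
            }
        ],
    }
    endpoint = {"EndpointName": name, "EndpointConfigName": f"{name}-config"}
    return model, endpoint_config, endpoint

model_req, config_req, endpoint_req = build_deployment_requests()
```

After the endpoint is in service, inference is a call to the SageMaker Runtime `invoke_endpoint` API with the image payload the model package expects.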
- Retail > Online (0.41)
- Information Technology > Services (0.41)