Serve Your ML Models in AWS Using Python

Automate your ML model train-deploy cycle, garbage collection, and rollbacks, all from Python, with an open-source PyPI package based on Cortex. It all started with the modernization of a product categorization project. The goal was to replace complex, low-level Docker commands with a simple and user-friendly deployment utility called Cortex. The resulting Python package proved reusable: we successfully used it as part of our recommendation engine project, and we plan to deploy all our ML projects this way. Since GLAMI relies heavily on open-source software, we wanted to contribute back and decided to open-source the package, calling it Cortex Serving Client.
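To make the deployment style concrete, here is a minimal sketch of the predictor class Cortex expects from a Python deployment. The class and method names follow Cortex's documented predictor interface; the model logic itself is a stub standing in for a real categorization model, and the `threshold` config key is a hypothetical example parameter.

```python
# Minimal sketch of a Cortex-style Python predictor (illustrative only;
# a real deployment would load a trained model in __init__).
class PythonPredictor:
    def __init__(self, config):
        # config carries deployment-time parameters passed in by Cortex.
        # "threshold" is a hypothetical parameter for this sketch.
        self.threshold = config.get("threshold", 0.5)

    def predict(self, payload):
        # Stub logic: a real predictor would run the ML model on the payload.
        score = float(payload.get("score", 0.0))
        return {"category": "fashion" if score >= self.threshold else "other"}
```

Cortex instantiates the class once per replica and calls `predict` for each request, so the expensive model loading belongs in `__init__` rather than in `predict`.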
