AWS updates databases, AI and serverless offerings
In a follow-up to the new compute, network and data service offerings announced by Amazon Web Services (AWS) CEO Adam Selipsky, AWS vice president of AI Swami Sivasubramanian pulled the covers off updates to the company's database, machine learning and serverless offerings.

Taking a cue from Selipsky's theme of simplifying AWS' array of services to make them easier for developers and enterprises to consume, Sivasubramanian announced three updates to AWS' plethora of database offerings: a new managed database service for business applications that lets developers and enterprises customise the underlying database and operating system; a new table class for Amazon DynamoDB designed to reduce storage costs for infrequently accessed data; and a service that uses machine learning to better diagnose and remediate database-related performance issues.

The new managed database service, Amazon RDS (Relational Database Service) Custom, is aimed at customers whose applications require customisation at the database level, leaving those customers responsible for time-consuming administrative tasks such as provisioning, database setup, patching and backups, Sivasubramanian said. Amazon RDS Custom automates these administrative processes while still allowing the database- and operating-system-level customisation these applications require, he said.
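To make the DynamoDB announcement concrete, the sketch below shows roughly how the new table class might be selected when creating a table with boto3. The table name and key schema are hypothetical placeholders; the request is built as a plain dictionary so the shape is visible without AWS credentials, and `TableClass` is the parameter assumed to switch between the standard class and the new infrequent-access class.

```python
def build_create_table_request(table_name: str, infrequent_access: bool = False) -> dict:
    """Build keyword arguments for dynamodb.create_table().

    The key schema below is a hypothetical single hash-key layout;
    TableClass="STANDARD_INFREQUENT_ACCESS" selects the new
    lower-storage-cost table class (the default is "STANDARD").
    """
    return {
        "TableName": table_name,
        "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
        "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
        "BillingMode": "PAY_PER_REQUEST",
        "TableClass": "STANDARD_INFREQUENT_ACCESS" if infrequent_access else "STANDARD",
    }

# Hypothetical table holding rarely read audit records.
request = build_create_table_request("audit-logs", infrequent_access=True)
# With credentials configured, the table would then be created via:
#   boto3.client("dynamodb").create_table(**request)
```

Because the infrequent-access class trades lower storage cost for higher per-request cost, it suits tables (like the hypothetical audit log above) that are written often but read rarely.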
Announcing Amazon SageMaker Inference Recommender
Today, we're pleased to announce Amazon SageMaker Inference Recommender -- a brand-new Amazon SageMaker Studio capability that automates load testing and optimizes model performance across machine learning (ML) instances. Ultimately, it reduces the time it takes to get ML models from development to production, and it optimizes the costs associated with their operation.

Until now, no service has given MLOps engineers a way to pick the optimal ML instances for their model. To optimize costs and maximize instance utilization, they had to rely on experience and intuition to select an ML instance type that would serve their model well, given its requirements at runtime. Moreover, with the vast array of ML instances available and the practically infinite nuances of each model, choosing the right instance type could take more than a few attempts.
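As a rough illustration of what replacing that trial-and-error process looks like in code, the sketch below builds the request for an Inference Recommender job using boto3's SageMaker client. The job name, role ARN, and model package ARN are all hypothetical placeholders; the request is assembled as a plain dictionary so it can be inspected without AWS credentials.

```python
def build_recommendation_job_request(job_name: str, role_arn: str,
                                     model_package_arn: str) -> dict:
    """Build keyword arguments for sagemaker.create_inference_recommendations_job().

    A "Default" job runs the automated load tests against a set of
    candidate instance types and returns recommendations; "Advanced"
    jobs allow custom traffic patterns and stopping conditions.
    """
    return {
        "JobName": job_name,
        "JobType": "Default",
        "RoleArn": role_arn,
        "InputConfig": {"ModelPackageVersionArn": model_package_arn},
    }

# All three arguments below are hypothetical example values.
request = build_recommendation_job_request(
    "resnet50-recommendation",
    "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/resnet50/1",
)
# With credentials configured, the job would then be started via:
#   boto3.client("sagemaker").create_inference_recommendations_job(**request)
```

The model is referenced through a versioned model package ARN, which is why the results (recommended instance types with their measured latency, throughput, and cost) can feed directly back into the deployment pipeline for that same model version.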