Bea Stollnitz - Creating batch endpoints in Azure ML
Suppose you've trained a machine learning model to accomplish some task, and you'd now like to provide that model's inference capabilities as a service. Maybe you're writing an application of your own that will rely on this service, or perhaps you want to make the service available to others. This is the purpose of endpoints -- they provide a simple web-based API for feeding data to your model and getting back inference results.

Azure ML currently supports three types of endpoints: batch endpoints, Kubernetes online endpoints, and managed online endpoints. I'm going to focus on batch endpoints in this post, but let me start by explaining how the three types differ. Batch endpoints are designed to handle large requests, working asynchronously and generating results that are stored in blob storage.
Aug-29-2022, 21:03:12 GMT