ONNX Runtime: a one-stop shop for machine learning inferencing
Organizations that want to leverage AI at scale must overcome a number of challenges around model training and model inferencing. Today, a plethora of tools and frameworks accelerate model training, but inferencing remains a tough nut to crack due to the variety of environments that models need to run in. For example, the same AI model might need to be inferenced on cloud GPUs as well as desktop CPUs and even edge devices. Optimizing a single model for so many different environments takes time, let alone hundreds or thousands of models. In this blog post, we'll show you how Microsoft tackled this challenge internally and how you can leverage the latest version of the same technology.
Sep-9-2019, 14:47:34 GMT