There is (rightfully) quite a bit of emphasis in the machine learning ecosystem on testing and optimizing models pre-deployment, with meta machine learning platforms like Comet becoming a standard part of the data science stack. There has been less emphasis, however, on testing and optimizing models post-deployment, at least as far as tooling is concerned. This dearth of tooling has forced many teams to build extra in-house infrastructure, adding yet another bottleneck on the way to production.

We've spent a lot of time thinking about A/B testing deployed models in Cortex, our open source ML deployment platform. After several iterations, we've built a set of features that make it easy to conduct scalable, automated A/B tests of deployed models.
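Before diving in, it may help to see the core mechanic of an A/B test: splitting incoming traffic between model variants according to configured weights. The sketch below is a minimal, hypothetical illustration in Python (the variant names and weights are made up, and Cortex handles this routing at the infrastructure level rather than in application code):

```python
import random

def route_request(variants, weights, rng=None):
    """Pick a model variant for one request, proportionally to its weight.

    variants -- list of variant identifiers, e.g. ["model-a", "model-b"]
    weights  -- relative traffic weights, e.g. [90, 10] for a 90/10 split
    rng      -- optional random.Random instance, useful for reproducible tests
    """
    rng = rng or random
    # random.choices performs weighted sampling; k=1 returns a single pick
    return rng.choices(variants, weights=weights, k=1)[0]

# Simulate 10,000 requests with a hypothetical 90/10 split
picks = [route_request(["model-a", "model-b"], [90, 10]) for _ in range(10_000)]
share_a = picks.count("model-a") / len(picks)
```

Over many requests, `share_a` converges toward the configured 0.9, which is what lets you compare the two variants' live metrics on comparable traffic.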
Apr-8-2021, 14:07:00 GMT