As machine learning and AI propagate through software products and services, we need to establish best practices and tools to test, deploy, manage, and monitor ML models in real-world production. In short, with MLOps we strive to avoid "technical debt" in machine learning applications. SIG MLOps defines "an optimal MLOps experience [as] one where Machine Learning assets are treated consistently with all other software assets within a CI/CD environment. Machine Learning models can be deployed alongside the services that wrap them and the services that consume them as part of a unified release process." By codifying these practices, we hope to accelerate the adoption of ML/AI in software systems and the rapid delivery of intelligent software.
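To make the idea of "treating ML assets like other software assets" concrete, here is a minimal sketch of a model packaged as a release artifact with a smoke test that a CI/CD pipeline could run before deploying the service that wraps it. All names here (`predict`, `MODEL_VERSION`, the feature weights) are illustrative assumptions, not any standard API:

```python
# Sketch: a model treated as a versioned release artifact. A CI/CD
# pipeline could run smoke_test() as a release gate, exactly as it
# would run unit tests for any other software asset.

MODEL_VERSION = "1.2.0"  # versioned alongside the service code

def predict(features):
    """Stand-in for a trained model: a hand-written linear scorer."""
    weights = {"clicks": 0.7, "dwell_time": 0.3}
    return sum(weights[k] * v for k, v in features.items())

def smoke_test():
    """Release gate: fail fast if the packaged model misbehaves."""
    score = predict({"clicks": 1.0, "dwell_time": 1.0})
    assert 0.0 <= score <= 2.0, f"score out of range: {score}"
    return True

if __name__ == "__main__":
    print(MODEL_VERSION, smoke_test())
```

The point is not the scoring logic but the workflow: the model, its version, and its acceptance check travel together through the same release process as the surrounding services.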
Artificial Intelligence (AI) and machine learning (ML) technologies extend the capabilities of software applications that are now found throughout our daily lives: digital assistants, facial recognition, photo captioning, banking services, and product recommendations. The difficult part about integrating AI or ML into an application is not the technology, the math, the science, or the algorithms. The challenge is getting the model deployed into a production environment and keeping it operational and supportable. Software development teams know how to deliver business applications and cloud services. AI/ML teams know how to develop models that can transform a business.
Google researchers found that "data cascades -- compounding events causing negative, downstream effects from data issues -- triggered by conventional AI/ML practices that undervalue data quality… are pervasive (92% prevalence), invisible, delayed, but often avoidable." Let's discuss the trend followed widely across most AI use cases in organizations. To be clear, the term AI is used here as an umbrella covering our data science, machine learning, and deep learning use cases. The two basic components of every AI system are data and the model, and both go hand in hand in producing the desired results. Yet the AI community has been biased toward putting more effort into model building; one plausible reason is that the AI industry closely follows academic research in AI.
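One practical way to counter such data cascades is to validate data before it ever reaches training. Below is a minimal sketch of a pre-training data-quality gate; the field names and the 5% missing-value threshold are illustrative assumptions, not values from the study:

```python
# Sketch: catch data issues at the data stage, where they are cheap to
# fix, instead of debugging a silently degraded model downstream.

def validate_rows(rows, required_fields, max_null_fraction=0.05):
    """Return a list of human-readable data-quality issues.

    rows: list of dicts, one per record
    required_fields: fields every record should contain
    max_null_fraction: tolerated fraction of missing values per field
    """
    if not rows:
        return ["dataset is empty"]
    issues = []
    for field in required_fields:
        missing = sum(1 for r in rows if r.get(field) is None)
        fraction = missing / len(rows)
        if fraction > max_null_fraction:
            issues.append(
                f"field '{field}' missing in {fraction:.0%} of rows "
                f"(limit {max_null_fraction:.0%})"
            )
    return issues

# Usage: block training when the data fails validation.
data = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},
    {"age": 29, "income": None},
    {"age": None, "income": 61000},
]
problems = validate_rows(data, required_fields=["age", "income"])
print(problems)
```

A check this simple already makes data issues visible and immediate rather than "invisible, delayed" as the study describes; production systems would extend it with schema, range, and drift checks.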
Somewhere around 2018, enterprise organizations started experimenting with Machine Learning (ML) features to build add-ons to their pre-existing solutions, or to create brand-new solutions for their clients. If you threw a few sci-fi-sounding ML features into your offering, you could attract more clients who were interested in trying out the newest tech. In current MLOps trends, the narrative has changed almost completely. Artificial Intelligence (AI) is advancing faster than technology of any previous era. The field is evolving extremely quickly, and people are more aware of its limitations and opportunities. Overall, AI/ML solutions are becoming equivalent to regular software solutions in terms of how companies use them, so it's no surprise that they need a well-planned framework for production, just as conventional software needs DevOps. This well-planned framework is MLOps.
The continual spread of new technology is creating a need for DevOps intelligence across the entire software development lifecycle. From development to delivery, product companies have transitioned their approach: the traditional waterfall model has been replaced by agile, and DevOps is being extended into DevSecOps. It is worth noting that the roles served by Agile and DevOps are complementary. By combining the efforts of Agile and DevOps to incorporate CI/CD, product companies ensure regular software updates throughout the year rather than a single major release.