Understanding Practitioners' Perspectives on Monitoring Machine Learning Systems

Hira Naveed, John Grundy, Chetan Arora, Hourieh Khalajzadeh, Omar Haggag

arXiv.org Artificial Intelligence 

Abstract--Given the inherent non-deterministic nature of machine learning (ML) systems, their behavior in production environments can lead to unforeseen and potentially dangerous outcomes. To detect unwanted behavior in a timely manner and protect organizations from financial and reputational damage, monitoring these systems is essential. This paper explores the strategies, challenges, and improvement opportunities for monitoring ML systems from the practitioners' perspective. We conducted a global survey of 91 ML practitioners to collect diverse insights into current monitoring practices for ML systems. We aim to complement existing research through our qualitative and quantitative analyses, focusing on prevalent runtime issues, industrial monitoring and mitigation practices, key challenges, and desired enhancements in future monitoring tools. Our findings reveal that practitioners frequently struggle with runtime issues related to declining model performance, excessive latency, and security violations. While most prefer automated monitoring for its greater efficiency, many still rely on manual approaches due to the complexity of, or lack of, appropriate automation solutions. Practitioners report that the initial setup and configuration of monitoring tools is often complicated and challenging, particularly when integrating with ML systems and setting alert thresholds. Moreover, practitioners find that monitoring adds extra workload, strains resources, and causes alert fatigue. The improvements practitioners most desire are: automated generation and deployment of monitors, improved support for performance and fairness monitoring, and recommendations for resolving runtime issues. These insights offer valuable guidance for the future development of ML monitoring tools that are better aligned with practitioners' needs.
Machine Learning (ML) systems are increasingly employed across various domains, including social media, e-commerce, and engineering; even critical domains such as finance, healthcare, and autonomous vehicles now leverage ML to automate and enhance their services. Generative AI and Large Language Models (LLMs) have further boosted ML adoption by creating several new use cases [1], [2]. A typical ML system lifecycle begins with gathering requirements and preparing data, followed by the development of the ML component (experimentation, model training, and evaluation) and other traditional software components [3]. After development, the next step is integration and system testing. Once quality assurance is completed, the ML system is deployed to a production environment.
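To make the kind of runtime monitoring discussed above concrete, the following is a minimal sketch of a threshold-based monitor that checks two of the signals practitioners reported struggling with: declining model performance and excessive latency. All names, thresholds, and the alerting interface are illustrative assumptions, not taken from the paper or any specific tool.

```python
# Illustrative sketch of a threshold-based runtime monitor for a deployed ML
# service. Thresholds and names are hypothetical, chosen only for the example.
from dataclasses import dataclass


@dataclass
class MonitorConfig:
    min_accuracy: float = 0.90    # alert if rolling accuracy drops below this
    max_latency_ms: float = 200.0  # alert if p95 latency exceeds this


def check_metrics(accuracy: float, p95_latency_ms: float,
                  cfg: MonitorConfig = MonitorConfig()) -> list:
    """Compare observed runtime metrics against alert thresholds.

    Returns a list of alert messages; an empty list means no violation.
    """
    alerts = []
    if accuracy < cfg.min_accuracy:
        alerts.append(f"model performance degraded: accuracy={accuracy:.2f}")
    if p95_latency_ms > cfg.max_latency_ms:
        alerts.append(f"latency threshold exceeded: p95={p95_latency_ms:.0f}ms")
    return alerts
```

Even this toy example surfaces the configuration problem the surveyed practitioners describe: choosing `min_accuracy` and `max_latency_ms` requires per-system judgment, and poorly chosen thresholds lead directly to the alert fatigue reported in the findings.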