Quantifying and Explaining Machine Learning Uncertainty in Predictive Process Monitoring: An Operations Research Perspective
Mehdiyev, Nijat, Majlatow, Maxim, Fettke, Peter
In today's highly competitive and complex business environment, organizations are under constant pressure to optimize their performance and decision-making processes. According to Herbert Simon, enhancing organizational performance relies on effectively channeling finite human attention towards the data that matters for decision-making, which necessitates integrating insights from information systems (IS), artificial intelligence (AI), and operations research (OR) [1]. Recent OR research supports this proposition: the discipline has been transformed by the abundant availability of rich, voluminous data from various sources, coupled with advances in machine learning [2]. Recently, heightened academic attention has been devoted to prescriptive analytics, a discipline that combines the results of predictive analytics with optimization techniques in a probabilistic framework to generate responsive, automated, restricted, time-sensitive, and optimal decisions [3]. The confluence of AI and OR is evident from their interdependent and complementary nature, as both disciplines strive to augment decision-making processes through computational and mathematical methodologies [4].
Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
Mehdiyev, Nijat, Majlatow, Maxim, Fettke, Peter
As data-driven intelligent systems advance, the need for reliable and transparent decision-making mechanisms has become increasingly important. Therefore, it is essential to integrate uncertainty quantification and model explainability approaches to foster trustworthy business and operational process analytics. This study explores how model uncertainty can be effectively communicated in global and local post-hoc explanation approaches, such as Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE) plots. In addition, this study examines appropriate visualization analytics approaches to facilitate such methodological integration. By combining these two research directions, decision-makers can not only justify the plausibility of explanation-driven actionable insights but also validate their reliability. Finally, the study includes expert interviews to assess the suitability of the proposed approach and designed interface for a real-world predictive process monitoring problem in the manufacturing domain.
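The combination of ICE/PDP explanations with uncertainty communication described in this abstract can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: a random forest stands in for the predictive model, the spread across the ensemble's individual trees serves as a simple uncertainty proxy for the PDP band, and the synthetic data, variable names, and 5-95% band are all assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy data standing in for process features (e.g., cycle times, queue lengths).
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

feature = 0
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 50)

# ICE: one curve per instance; PDP: their pointwise mean.
# Uncertainty proxy: per-tree mean predictions at each grid point.
ice = np.empty((len(X), len(grid)))
tree_preds = np.empty((len(model.estimators_), len(grid)))
for j, v in enumerate(grid):
    X_mod = X.copy()
    X_mod[:, feature] = v
    ice[:, j] = model.predict(X_mod)
    for i, tree in enumerate(model.estimators_):
        tree_preds[i, j] = tree.predict(X_mod).mean()
pdp = ice.mean(axis=0)
lo, hi = np.percentile(tree_preds, [5, 95], axis=0)

plt.plot(grid, ice[:50].T, color="lightgray", linewidth=0.5)  # subset of ICE curves
plt.plot(grid, pdp, color="black", label="PDP (mean ICE)")
plt.fill_between(grid, lo, hi, alpha=0.3, label="5-95% ensemble band")
plt.xlabel("feature value"); plt.ylabel("prediction"); plt.legend()
plt.show()
```

The ensemble band is only one of several possible uncertainty proxies; quantile regression or a Bayesian model would yield analogous bands for the same visual encoding.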
Local Post-Hoc Explanations for Predictive Process Monitoring in Manufacturing
Mehdiyev, Nijat, Fettke, Peter
This study proposes an innovative explainable process prediction solution to facilitate data-driven decision making for process planning in manufacturing. After integrating top-floor and shop-floor data obtained from various enterprise information systems, in particular from Manufacturing Execution Systems, a deep neural network was applied to predict the process outcomes. Since we aim to operationalize the delivered predictive insights by embedding them into decision-making processes, it is essential to generate relevant explanations for domain experts. To this end, two local post-hoc explanation approaches, Shapley values and Individual Conditional Expectation (ICE) plots, are applied; these are expected to enhance decision-making capabilities by enabling experts to examine explanations from different perspectives. After assessing the predictive strength of the adopted deep neural network with relevant binary classification evaluation measures, a discussion of the generated explanations is provided. Lastly, a brief discussion of ongoing activities in the scope of the current application and of the future implementation plan concludes the study.
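As a concrete illustration of the local post-hoc setup summarized above, a model-agnostic Shapley value estimate for a single instance might look as follows. This is a hedged sketch: the shap library's KernelExplainer is one common way to approximate Shapley values for an arbitrary classifier, but the paper's actual model, features, and estimation procedure are not reproduced here; the MLP and data below are synthetic stand-ins.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Stand-in for the process-outcome classifier; the real model is a deep
# network trained on integrated MES/top-floor features.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X, y)

# Model-agnostic Shapley value estimation for one instance: KernelExplainer
# perturbs features against a background sample and fits a weighted linear model.
background = shap.sample(X, 100, random_state=0)
explainer = shap.KernelExplainer(lambda data: clf.predict_proba(data)[:, 1],
                                 background)
instance = X[:1]
shap_values = explainer.shap_values(instance, nsamples=200)
print("local attribution per feature:", np.round(shap_values, 3))
```

Each attribution sums (with the background's expected value) to the instance's predicted probability, which is what lets experts read the values as per-feature contributions to a single prediction.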
Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring
Mehdiyev, Nijat, Fettke, Peter
Contemporary process-aware information systems are capable of recording the activities generated during process execution. To leverage these process-specific, fine-granular data, process mining has recently emerged as a promising research discipline. As an important branch of process mining, predictive business process management pursues the objective of generating forward-looking, predictive insights to shape business processes. In this study, we propose a conceptual framework that seeks to establish and promote an understanding of the decision-making environment, the underlying business processes, and the nature of user characteristics for developing explainable business process prediction solutions. Building on the theoretical and practical implications of this framework, the study then proposes a novel local post-hoc explanation approach for a deep learning classifier that is expected to help domain experts justify model decisions. In contrast to popular perturbation-based local explanation approaches, this study defines the local regions from the validation dataset by using the intermediate latent space representations learned by the deep neural network. To validate the applicability of the proposed explanation method, real-life process log data delivered by Volvo IT Belgium's incident management system are used. The adopted deep learning classifier achieves good performance, with an Area Under the ROC Curve of 0.94. The generated local explanations are also visualized and presented with relevant evaluation measures that are expected to increase users' trust in the black-box model.
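The latent-space neighborhood idea in this abstract can be sketched under explicit assumptions: a scikit-learn MLP stands in for the deep classifier, its first hidden layer's ReLU activations serve as the latent space, and a LIME-style ridge surrogate is fitted on the black box's predicted probabilities within the retrieved region. Every choice here (layer, neighborhood size, surrogate) is an illustrative reconstruction, not the authors' exact method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Ridge
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_val, y_train = X[:1500], X[1500:], y[:1500]
nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                   random_state=0).fit(X_train, y_train)

def latent(model, data):
    """First-hidden-layer ReLU activations, used here as the latent space."""
    return np.maximum(0, data @ model.coefs_[0] + model.intercepts_[0])

# Define the local region: validation instances closest to the query
# in the network's learned latent space (not in the raw input space).
Z_val = latent(nn, X_val)
query = X_val[:1]
knn = NearestNeighbors(n_neighbors=50).fit(Z_val)
_, idx = knn.kneighbors(latent(nn, query))
local_X = X_val[idx[0]]

# Fit an interpretable surrogate on the black box's own outputs in that region;
# its coefficients act as a local explanation of the model's behavior.
target = nn.predict_proba(local_X)[:, 1]
surrogate = Ridge().fit(local_X, target)
print("local surrogate coefficients:", np.round(surrogate.coef_, 3))
print("fidelity (R^2) to black box:", surrogate.score(local_X, target))
```

Retrieving neighbors in latent rather than input space is the key departure from perturbation-based methods: the region is defined by what the network itself treats as similar, and the surrogate's fidelity score gives one of the evaluation measures the abstract alludes to.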