

Implet: A Post-hoc Subsequence Explainer for Time Series Models

Meng, Fanyu, Kan, Ziwen, Rezaei, Shahbaz, Kong, Zhaodan, Chen, Xin, Liu, Xin

arXiv.org Artificial Intelligence

Explainability in time series models is crucial for fostering trust, facilitating debugging, and ensuring interpretability in real-world applications. In this work, we introduce Implet, a novel post-hoc explainer that generates accurate and concise subsequence-level explanations for time series models. Our approach identifies critical temporal segments that significantly contribute to the model's predictions, providing enhanced interpretability beyond traditional feature-attribution methods. Building on Implet, we propose a cohort-based (group-level) explanation framework designed to further improve the conciseness and interpretability of our explanations. We evaluate Implet on several standard time-series classification benchmarks, demonstrating its effectiveness in improving interpretability.

Deep learning models have demonstrated remarkable success in various time series forecasting and classification tasks, often surpassing traditional statistical methods. Despite their effectiveness, these models are frequently considered black boxes, making their predictions challenging to interpret. Understanding which temporal patterns or subsequences contribute significantly to a model's decisions is crucial for building trust, debugging erroneous predictions, and making informed decisions, especially in high-stakes domains such as finance, healthcare, and climate science. Existing explainability methods for time series predominantly rely on feature-attribution techniques, such as gradient-based saliency maps or perturbation-based approaches. While these methods provide valuable insights into the individual time points or features influencing model predictions, their high dimensionality can complicate interpretation. In comparison, subsequence-based explanations, such as shapelet-based methods, are a type of global explanation that offers more intuitive insights by identifying discriminative temporal patterns within time series data.
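The perturbation-based attribution that the abstract contrasts with subsequence explanations can be sketched generically. The function below is a hypothetical illustration, not the paper's Implet method: it occludes sliding windows of a series and records the resulting drop in a black-box model's score, showing how segment-level importance can emerge from per-window perturbations.

```python
import numpy as np

def occlusion_importance(predict, x, window=4):
    """Perturbation-based attribution (illustrative sketch):
    mask each length-`window` segment with the series mean and
    record the largest score drop each time step participates in."""
    base = predict(x)
    scores = np.zeros(len(x))
    for start in range(len(x) - window + 1):
        xm = x.copy()
        xm[start:start + window] = x.mean()  # occlude this segment
        drop = base - predict(xm)
        scores[start:start + window] = np.maximum(
            scores[start:start + window], drop
        )
    return scores

# Toy "model": scores a series by the height of its largest bump.
predict = lambda x: float(x.max())
x = np.zeros(32)
x[10:14] = 5.0  # a single discriminative bump
imp = occlusion_importance(predict, x)
# The bump region dominates the importance profile.
assert imp[10:14].min() > imp[:8].max()
```

A gradient-based saliency map would instead differentiate the model's output with respect to each time step; the occlusion variant above needs only black-box access, at the cost of one forward pass per window.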


CohEx: A Generalized Framework for Cohort Explanation

Meng, Fanyu, Liu, Xin, Kong, Zhaodan, Chen, Xin

arXiv.org Artificial Intelligence

eXplainable Artificial Intelligence (XAI) has garnered significant attention for enhancing transparency and trust in machine learning models. However, most existing explanation techniques focus either on offering a holistic view of the explainee model (global explanation) or on individual instances (local explanation), while the middle ground, i.e., cohort-based explanation, is less explored. Cohort explanations offer insights into the explainee's behavior on a specific group or cohort of instances, enabling a deeper understanding of model decisions within a defined context. In this paper, we discuss the unique challenges and opportunities associated with measuring cohort explanations, define their desired properties, and create a generalized framework for generating cohort explanations based on supervised clustering.
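As a toy illustration of the cohort idea (not the paper's CohEx framework), the sketch below groups instances by their per-instance attribution vectors and averages attributions within each group; the attribution values and the grouping rule (dominant feature as a stand-in for supervised clustering) are invented for illustration.

```python
import numpy as np

# Hypothetical local attributions for 6 instances over 3 features
# (e.g., as produced by SHAP or gradients; values are invented).
attributions = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.7, 0.2],
    [0.0, 0.9, 0.1],
    [0.2, 0.1, 0.8],
    [0.1, 0.0, 0.9],
])

# Group instances by their dominant feature -- a crude stand-in for
# the supervised-clustering step; each group becomes a cohort.
cohorts = {}
for i, a in enumerate(attributions):
    cohorts.setdefault(int(a.argmax()), []).append(i)

# A cohort explanation: the mean attribution within each cohort,
# which is more specific than a global average and more concise
# than six separate local explanations.
cohort_explanations = {
    k: attributions[idx].mean(axis=0) for k, idx in cohorts.items()
}
```

Here the six instances collapse into three cohorts, each summarized by one averaged attribution vector; a real framework would also score how faithfully each summary represents its cohort.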


Interpreting Inflammation Prediction Model via Tag-based Cohort Explanation

Meng, Fanyu, Larke, Jules, Liu, Xin, Kong, Zhaodan, Chen, Xin, Lemay, Danielle, Tagkopoulos, Ilias

arXiv.org Artificial Intelligence

One significant application is in nutrition science, where ML models can provide dietary recommendations, detect food quality and safety issues during production, and support public health surveillance and epidemiology. However, the complex and often opaque nature of these models presents challenges in understanding and trusting their predictions. To address these issues, explainability techniques have garnered considerable interest, aiming to make ML models more interpretable and transparent. Explainability can be approached from different perspectives, including local explanations that focus on individual predictions and global explanations that provide insights into the overall behavior of the model. However, there is a growing need for intermediate-level explanations that balance these two extremes, offering contextually relevant insights that are both comprehensive and specific (Sokol and Flach, 2020; Arrieta et al., 2020; Adadi and Berrada, 2018). Cohort explainability, also referred to as subgroup explainability, explains model predictions by analyzing groups of instances with shared characteristics; it emerges as a promising solution to this challenge.