Interpreting Inflammation Prediction Model via Tag-based Cohort Explanation
Fanyu Meng, Jules Larke, Xin Liu, Zhaodan Kong, Xin Chen, Danielle Lemay, Ilias Tagkopoulos
arXiv.org Artificial Intelligence
One significant application of machine learning (ML) is in nutrition science, where models can provide dietary recommendations, detect food quality and safety issues during production, and support public health surveillance and epidemiology. However, the complex and often opaque nature of these models makes their predictions difficult to understand and trust. To address this, explainability techniques have garnered considerable interest, aiming to make ML models more interpretable and transparent. Explainability can be approached from different perspectives: local explanations focus on individual predictions, while global explanations provide insight into the overall behavior of the model. There is a growing need, however, for intermediate-level explanations that balance these two extremes, offering contextually relevant insights that are both comprehensive and specific (Sokol and Flach, 2020; Arrieta et al., 2020; Adadi and Berrada, 2018). Cohort explainability, also referred to as subgroup explainability, emerges as a promising solution to this challenge: it explains model predictions by analyzing groups of instances that share common characteristics.
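The idea of a cohort explanation can be illustrated with a minimal sketch: compute per-instance feature attributions and then aggregate them within groups of instances that share a tag. The setup below is hypothetical (a linear model with made-up weights and invented cohort tags, not the paper's method); it only shows how cohort-level summaries sit between local and global explanations.

```python
import numpy as np

# Hypothetical setup: synthetic data, a linear model with assumed
# weights, and invented cohort tags. This is NOT the paper's method,
# only an illustration of cohort-level aggregation of attributions.
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
coef = np.array([2.0, -1.0, 0.5])                       # assumed model weights
tags = rng.choice(["high_fiber", "low_fiber"], size=n)  # hypothetical cohort tags

# Local attribution for a linear model: weight times the feature's
# deviation from its dataset mean, one attribution vector per instance.
attributions = coef * (X - X.mean(axis=0))

# Cohort explanation: average the local attributions within each tagged
# group, yielding one feature-importance profile per cohort.
cohort_expl = {
    tag: attributions[tags == tag].mean(axis=0)
    for tag in np.unique(tags)
}
for tag, vals in cohort_expl.items():
    print(tag, np.round(vals, 3))
```

Averaging over all instances would recover a global summary, while a single row of `attributions` is a local explanation; the per-tag averages occupy the intermediate level the abstract describes.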
Oct-17-2024