Developing Explainable Machine Learning Model using Augmented Concept Activation Vector
Hassanpour, Reza, Oztoprak, Kasim, Netten, Niels, Busker, Tony, Bargh, Mortaza S., Choenni, Sunil, Kizildag, Beyza, Kilinc, Leyla Sena
arXiv.org Artificial Intelligence
Machine learning models use high-dimensional feature spaces to map their inputs to the corresponding class labels. However, these features often do not have a one-to-one correspondence with physical concepts understandable by humans, which hinders the ability to provide a meaningful explanation for the decisions made by these models. We propose a method for measuring the correlation between high-level concepts and the decisions made by a machine learning model. Our method can isolate the impact of a given high-level concept and measure it quantitatively and accurately. Additionally, this study aims to determine the prevalence, within machine learning models, of frequent patterns that often occur in imbalanced datasets. We have successfully applied the proposed method to fundus images and quantitatively measured the impact of radiomic patterns on the model's decisions.
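The abstract builds on the idea of concept activation vectors: fit a linear separator between layer activations of concept examples and random examples, take its normal as the concept direction, and score how often the class gradient points along that direction. The sketch below is a minimal, hedged illustration of that general recipe (not the paper's augmented method); the least-squares classifier and the synthetic data are stand-ins chosen here for simplicity.

```python
import numpy as np

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear separator between activations of concept examples and
    random examples; the (unit) normal of the boundary is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), -np.ones(len(random_acts))])
    # Least-squares linear classifier with a bias column -- a simple
    # stand-in for the logistic regression usually used to fit CAVs.
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    v = w[:-1]                      # drop the bias term
    return v / np.linalg.norm(v)    # unit-length concept direction

def tcav_score(gradients, cav):
    """Fraction of inputs whose class-logit gradient (taken w.r.t. the
    layer activations) has a positive directional derivative along the CAV."""
    return float(np.mean(gradients @ cav > 0))

# Synthetic activations standing in for a real network layer.
rng = np.random.default_rng(0)
concept = rng.normal(1.0, 0.3, size=(50, 8))    # "concept" activations
random_ = rng.normal(-1.0, 0.3, size=(50, 8))   # random counterexamples
cav = concept_activation_vector(concept, random_)

# Hypothetical per-input gradients, here deliberately aligned with the concept.
grads = rng.normal(1.0, 0.3, size=(20, 8))
score = tcav_score(grads, cav)
```

A score near 1 would indicate that nudging activations toward the concept direction consistently raises the class logit, i.e. the concept correlates with the model's decision.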
Dec-26-2024