Explainable Artificial Intelligence for Dependent Features: Additive Effects of Collinearity
Explainable Artificial Intelligence (XAI) emerged to reveal the internal mechanisms of machine learning models and how features affect prediction outcomes. Collinearity is one of the major issues XAI methods face when identifying the most informative features in a model. Current XAI approaches assume that the features in a model are independent and calculate the effect of each feature on the prediction independently of the remaining features. However, this assumption is unrealistic in real-life applications. We propose Additive Effects of Collinearity (AEC), a novel XAI method that accounts for collinearity when modeling the effect of each feature on the outcome. AEC is based on the idea of dividing a multivariate model into several univariate models in order to examine the features' impact on each other and, consequently, on the outcome. The proposed method is evaluated on simulated and real data to validate its efficiency in comparison with a state-of-the-art XAI method. The results indicate that AEC is more robust and stable against the impact of collinearity when explaining AI models than the state-of-the-art XAI method.
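The core idea above, splitting a multivariate model into univariate pieces so that collinear features' shared contributions become visible, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' exact AEC algorithm: it fits a univariate regression of the outcome on each feature and univariate regressions among the features, so that a feature's apparent effect can be traced through its collinear partners.

```python
import numpy as np

def univariate_effects(X, y):
    """Illustrative sketch of the 'divide the multivariate model into
    univariate models' idea (NOT the published AEC method).

    For each feature j, fit y ~ x_j to get its apparent univariate
    effect, and fit x_k ~ x_j for every other feature k to expose the
    collinearity paths through which that apparent effect travels."""
    n, p = X.shape
    direct = np.empty(p)       # slope of y on x_j alone
    shared = np.zeros((p, p))  # slope of x_k on x_j (collinearity path)
    for j in range(p):
        xj = X[:, j] - X[:, j].mean()
        denom = xj @ xj
        direct[j] = xj @ (y - y.mean()) / denom
        for k in range(p):
            if k != j:
                shared[j, k] = xj @ (X[:, k] - X[:, k].mean()) / denom
    return direct, shared

# Two strongly collinear features; y depends on both with weight 1.0.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + 0.1 * rng.normal(size=500)   # x2 is nearly a copy of x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + 0.1 * rng.normal(size=500)

direct, shared = univariate_effects(X, y)
# Each univariate slope absorbs its partner's effect (both near 2.0),
# while shared[0, 1] near 1.0 reveals the collinearity path between them.
```

The point of the toy is exactly the failure mode the abstract describes: an independence-assuming explainer would read each feature's effect as roughly 2.0, double its true weight, whereas the inter-feature univariate slopes make the shared, collinear portion of that effect explicit so it can be attributed additively.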
Oct-30-2024