Black Box Model Explanations and the Human Interpretability Expectations -- An Analysis in the Context of Homicide Prediction
Ribeiro, José, Carneiro, Níkolas, Alves, Ronnie
Strategies based on Explainable Artificial Intelligence (XAI) have promoted better human interpretability of the results of black-box machine learning models. This sets a precedent for questioning whether human expectations are being met by the explanations such models produce. The XAI measures currently in use (CIU, Dalex, ELI5, LOFO, SHAP, and Skater) provide various forms of explanation, including global rankings of attribute relevance, which give an overview of how the model's outputs are explained in terms of its inputs. These measures increase the explainability of the model and ground its interpretability in the context of the problem. Current research points to the need for further studies (within a specific context/problem) on how these explanations meet the interpretability expectations of human experts, and on how they can be used to make the model even more transparent while taking into account the specific complexities of the model and dataset being analyzed, as well as important human factors of sensitive real-world contexts/problems. Intending to shed light on the explanations generated by XAI measures and their interpretability, this research addresses a real-world classification problem related to homicide prediction, duly endorsed by the scientific community: it replicates the proposed black-box model, uses 6 different XAI measures to generate explanations, and asks 6 different human experts to produce what this research refers to as Interpretability Expectations (IE). The results were computed through comparative analysis and identification of relationships among all the attribute ranks produced: 49% concordance was found between the attributes indicated by the XAI measures and by the human experts, 41% were indicated exclusively by the XAI measures, and 10% exclusively by the human experts. The results allow for answering questions such as "Do the different XAI measures generate similar explanations for the proposed problem?", "Are the interpretability expectations generated among different human experts similar?", and "Do the ..."
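The sketch below is not the paper's actual pipeline; it is a minimal illustration, assuming a scikit-learn stand-in dataset and model, of the two steps the abstract describes: deriving a global attribute-relevance ranking from a black-box model with one of the cited XAI measures (SHAP), and computing the three concordance categories (indicated by both sides, by XAI only, by experts only) against a hypothetical expert list standing in for the Interpretability Expectations.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for the homicide-prediction dataset and black-box model.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"attr_{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Step 1: global relevance ranking -- mean absolute SHAP value per attribute.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
if isinstance(shap_values, list):      # older SHAP: one array per class
    shap_values = shap_values[1]
elif shap_values.ndim == 3:            # newer SHAP: (samples, features, classes)
    shap_values = shap_values[:, :, 1]
global_importance = np.abs(shap_values).mean(axis=0)
xai_rank = [feature_names[i] for i in np.argsort(-global_importance)]
print("XAI global ranking:", xai_rank)

# Step 2: concordance categories between the XAI-indicated attributes and a
# hypothetical expert-indicated set (placeholder for the paper's IE lists).
xai_set = set(xai_rank[:5])                            # top-k per the XAI measure
expert_set = {"attr_0", "attr_2", "attr_3", "attr_7"}  # placeholder expert list
union = xai_set | expert_set
print(f"both: {len(xai_set & expert_set) / len(union):.0%}, "
      f"XAI only: {len(xai_set - expert_set) / len(union):.0%}, "
      f"experts only: {len(expert_set - xai_set) / len(union):.0%}")
```

The paper aggregates six such XAI rankings and six expert rankings before comparing; this sketch uses one of each to keep the concordance computation visible.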
arXiv.org Artificial Intelligence
Oct-19-2022
- Country:
  - Europe > United Kingdom
    - England > Cambridgeshire > Cambridge (0.14)
  - North America > United States
    - California > San Francisco County
      - San Francisco (0.14)
    - New York > New York County
      - New York City (0.04)
  - South America
    - Brazil
      - Pará > Belém (0.14)
      - Rio de Janeiro > Rio de Janeiro (0.04)
    - Chile > Santiago Metropolitan Region
      - Santiago Province > Santiago (0.04)
- Genre:
  - Research Report > New Finding (0.93)
- Industry:
  - Law > Criminal Law (1.00)
  - Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)