
Collaborating Authors

 Castillo, David


Model Interpretation and Explainability: Towards Creating Transparency in Prediction Models

arXiv.org Artificial Intelligence

Explainable AI (XAI) has a counterpart in analytical modeling which we refer to as model explainability. We tackle the issue of model explainability in the context of prediction models. We analyze a dataset of loans from a credit card company using the following three steps: execute and compare four different prediction methods, apply the best known explainability techniques in the current literature to the model training sets to identify feature importance (FI) (static case), and finally to cross-check whether the FI set holds up under "what if" prediction …

Model explainability and interpretability are now being perceived as desirable, if not required, features of data science and predictive analytics overall. Our objective here is to examine what these features may look like when applied to previous research we have conducted in the area of econometric prediction and predictive analytics [10]. We consider the domain of Lending Club loan applications. For our dataset, we perform three different analyses: 1. Model Execution and Comparison. Run and compare four different prediction models on the …
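The snippet outlines a three-step pipeline but does not name the four prediction methods or the explainability techniques. As a rough illustration of steps 1 and 2 only, the sketch below fits four common classifiers to a synthetic stand-in for the loan data and ranks features by permutation importance; the model choices, dataset, and scikit-learn workflow are assumptions for illustration, not the paper's actual setup.

```python
# Illustrative sketch of steps 1-2 on a synthetic stand-in for the loan data.
# The four models and the dataset here are assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic binary-outcome data standing in for loan applications.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 1: execute and compare four different prediction methods.
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")

# Step 2: identify feature importance (FI) per model (the static case).
for name, model in models.items():
    fi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    top = np.argsort(fi.importances_mean)[::-1][:3]
    print(f"{name}: top features by permutation FI = {top.tolist()}")

# Step 3 (the "what if" case) would re-score the FI set under altered inputs,
# e.g. shifting a feature's values and checking whether the ranking holds.
```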


Unified Explanations in Machine Learning Models: A Perturbation Approach

arXiv.org Artificial Intelligence

A high-velocity paradigm shift towards Explainable Artificial Intelligence (XAI) has emerged in recent years. Highly complex Machine Learning (ML) models have flourished in many tasks of intelligence, and the questions have started to shift away from traditional metrics of validity towards something deeper: What is this model telling me about my data, and how is it arriving at these conclusions? Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches. To address these problems, …

This communication problem has migrated into the arena of machine learning (ML) and artificial intelligence (AI) in recent times, giving rise to the need for and subsequent emergence of Explainable AI (XAI). XAI has arisen from growing discontent with "black box" models, often in the form of neural networks and other emergent, dynamic models (e.g., agent-based simulation, genetic algorithms) that generate outcomes lacking in transparency. This has also been studied through the lens of general machine learning, where classic methods also face an interpretability crisis for high-dimensional inputs [1].
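To make the perturbation idea concrete, here is a minimal sketch of one generic perturbation-based importance score: shuffle one feature at a time and measure how much a black-box model's output moves. This is an assumption for illustration only, not the specific unified framework the paper proposes; the function name and interface are hypothetical.

```python
# Minimal sketch of a perturbation-style explanation: a feature's importance is
# the change in model output when that feature is perturbed. Generic illustration,
# not the paper's unified framework.
import numpy as np

def perturbation_importance(predict_fn, X, n_repeats=10, rng=None):
    """Mean absolute change in predictions when each column of X (a NumPy
    array) is shuffled, breaking its link to the other features."""
    rng = np.random.default_rng(rng)
    baseline = predict_fn(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # perturb only feature j
            deltas.append(np.mean(np.abs(predict_fn(Xp) - baseline)))
        importances[j] = np.mean(deltas)
    return importances

# Usage with any black-box scorer, e.g. a fitted classifier's probabilities:
#   perturbation_importance(lambda X: clf.predict_proba(X)[:, 1], X_test)
```

Because the scorer is treated as a black box, the same procedure applies unchanged to any of the model families mentioned above, which is what makes perturbation approaches attractive as a unifying lens.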