The Challenge of Imputation in Explainable Artificial Intelligence Models
Ahmad, Muhammad Aurangzeb, Eckert, Carly, Teredesai, Ankur
arXiv.org Artificial Intelligence
Even though the field of Artificial Intelligence is more than sixty years old, only in the last decade or so have AI systems become increasingly interwoven into the socio-technical fabric of society, and they are thus having a massive impact on it. This growing incorporation of AI has led to increased calls for accountability and regulation of AI systems [8]. Model explanations are considered one of the most important ways to provide accountability for AI systems. Model explanations, however, can only be as good as the data on which the underlying algorithms are trained. This is where the issue of missing and imputed data becomes pivotal for model explanations. In some domains, like healthcare, almost all datasets have missing values [6]. As many applications of AI in healthcare are patient-oriented, decisions informed by AI and ML models can have significant clinical consequences.
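As a minimal sketch (not from the paper) of why the imputation choice matters for explanations: two standard strategies, mean and median imputation, can fill the same missing entry with very different values when the observed data contain outliers, and any downstream explanation is computed on whichever filled value was chosen. The `impute` helper and the example column below are hypothetical.

```python
# Minimal sketch, assuming a single numeric feature column with
# missing entries marked as math.nan. Mean vs. median imputation
# can disagree sharply when the observed values contain an outlier,
# and model explanations inherit whichever choice was made.
import math
import statistics

def impute(values, strategy):
    """Replace math.nan entries with the mean or median of the observed values."""
    observed = [v for v in values if not math.isnan(v)]
    fill = statistics.mean(observed) if strategy == "mean" else statistics.median(observed)
    return [fill if math.isnan(v) else v for v in values]

# Hypothetical lab-value column: one outlier (100.0) and one missing entry.
column = [1.0, 2.0, 100.0, math.nan]
print(impute(column, "mean")[-1])    # outlier pulls the mean upward
print(impute(column, "median")[-1])  # median stays near the typical values
```

The two strategies assign the missing patient very different values, so a feature-attribution method run on the imputed dataset would attribute correspondingly different influence to that feature.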
Jul-29-2019