Towards a Shapley Value Graph Framework for Medical peer-influence
Duell, Jamie, Seisenberger, Monika, Aarts, Gert, Zhou, Shangming, Fan, Xiuyi
– arXiv.org Artificial Intelligence
Explainable Artificial Intelligence (XAI) is at the forefront of Artificial Intelligence (AI) research, with a variety of techniques and libraries coming to fruition in recent years, e.g., model-agnostic explanations [1, 2], counterfactual explanations [3, 4], contrastive explanations [5] and argumentation-based explanations [6, 7]. XAI methods are ubiquitous across fields of Machine Learning (ML), where trust in applied ML is undermined by the black-box nature of the methods. Generally speaking, an ML model takes a set of inputs (features) and predicts some output, and existing work on XAI predominantly focuses on understanding the relations between features and the output. These approaches are successful in many areas because they suggest how the output of a model might change should we change its inputs. Thus, interventions, that is, manipulating inputs in specific ways in the hope of reaching some desired outcome, can be guided by existing XAI methods when those methods provide reasonably accurate explanations [8, 9]. However, since existing XAI methods hold little knowledge of the consequences of interventions [10], such interventions could be susceptible to error. From both a business and an ethical standpoint, we must reach beyond understanding the relations between features and outputs; we also need to understand the influence that features have on one another. We believe such knowledge holds the key to a deeper understanding of model behaviours and to the identification of suitable interventions.
Dec-29-2021
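To make the feature-to-output attribution discussed in the abstract concrete, the sketch below computes exact Shapley value attributions for a toy predictor. This is a minimal illustration rather than the graph framework proposed by the paper: the `predict` function, the zero baseline and the brute-force coalition enumeration (feasible only for a handful of features) are all illustrative assumptions.

```python
from itertools import combinations
from math import factorial

import numpy as np

# Hypothetical stand-in for a black-box predictor; any f: R^n -> R would do.
def predict(x):
    weights = np.array([0.4, -0.2, 0.7])
    return float(weights @ x)

def shapley_values(predict_fn, x, baseline):
    """Exact Shapley attributions for a single instance x.

    Features outside a coalition keep their `baseline` values, a common
    simplification in model-agnostic attribution methods.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Standard Shapley weight |S|! (n-|S|-1)! / n! for a coalition S not containing i.
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                without_i = baseline.copy()
                if subset:
                    without_i[list(subset)] = x[list(subset)]
                with_i = without_i.copy()
                with_i[i] = x[i]
                phi[i] += weight * (predict_fn(with_i) - predict_fn(without_i))
    return phi

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
phi = shapley_values(predict, x, baseline)
print(phi)                                        # per-feature contributions
print(phi.sum(), predict(x) - predict(baseline))  # efficiency: the two quantities agree
```

Summing weighted marginal contributions over coalitions is the same principle used by popular model-agnostic attribution tools; the paper's stated aim is to move beyond such feature-output relations towards the influence features have on one another.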