Sound Explanation for Trustworthy Machine Learning
Kai Jia, Pasapol Saowakon, Limor Appelbaum, Martin Rinard
arXiv.org Artificial Intelligence
We take a formal approach to the explainability problem of machine learning systems. We argue against the practice of interpreting black-box models by attributing scores to input components, due to the inherently conflicting goals of attribution-based interpretation. We prove that no attribution algorithm simultaneously satisfies specificity, additivity, completeness, and baseline invariance. We then formalize the concept of sound explanation, which has been informally adopted in prior work. A sound explanation provides sufficient information to causally explain the predictions made by a system. Finally, we present feature selection as a sound explanation for cancer prediction models, with the goal of cultivating trust among clinicians.
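The idea of feature selection as a sound explanation can be illustrated with a minimal sketch (all feature names, weights, and the threshold below are hypothetical, not taken from the paper): because the model is restricted to reading only the selected features, those features are sufficient to causally explain every prediction, and no unselected feature can change the output.

```python
# Hypothetical illustration: a toy risk model that reads only a
# selected feature subset, so the subset causally explains predictions.

SELECTED = ["age", "biomarker_a"]  # hypothetical selected features

def predict(patient):
    """Toy risk score over the selected features only (made-up weights)."""
    score = 0.03 * patient["age"] + 1.5 * patient["biomarker_a"]
    return "high risk" if score > 2.5 else "low risk"

p = {"age": 60, "biomarker_a": 0.9, "biomarker_b": 5.0}
q = dict(p, biomarker_b=-5.0)  # perturb an unselected feature

# The prediction depends only on SELECTED, so it is unchanged:
assert predict(p) == predict(q)
```

Because the prediction is invariant to every feature outside the selected subset, reporting that subset (and the model over it) is a causal, and hence sound, explanation rather than a post-hoc attribution.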
June 8, 2023