Interpretability in Machine Learning: on the Interplay with Explainability, Predictive Performances and Models
Leblanc, Benjamin, Germain, Pascal
–arXiv.org Artificial Intelligence
In some areas, such as the medical field, ML-assisted predictions or decisions can drastically impact human lives. For example, breast cancer [131] can be devastating if not diagnosed in time (or at all). The use of black-box predictors in such crucial settings has proven misleading more than once: a classic example is the use of the COMPAS system by the US judicial system to predict criminal recidivism [133]. Other cases where fairness has been jeopardized by the use of black boxes are numerous: job and loan applications biased toward men [40]; mortgage approvals biased toward white applicants [122]; higher credit card limits for men [172]; etc. Over time, it became clear that interpretability is crucial for understanding how a predictor behaves, and thus for preventing unfortunate outcomes; as pointed out by Goodman and Flaxman [70]: "If we do not know how ML [predictors] work, we cannot check or regulate them to ensure that they do not encode discrimination against minorities [...], we will not be able to learn from instances in which it is mistaken."
Nov-19-2023