Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, Rudin et al., arXiv 2019

It's pretty clear from the title alone what Cynthia Rudin would like us to do! The paper is a mix of technical and philosophical arguments and comes with two main takeaways for me: firstly, a sharpening of my understanding of the difference between explainability and interpretability, and why the former may be problematic; and secondly, some great pointers to techniques for creating truly interpretable models.

A model can be a black box for one of two reasons: (a) the function that the model computes is far too complicated for any human to comprehend, or (b) the model may in actual fact be simple, but its details are proprietary and not available for inspection.

In explainable ML we make predictions using a complicated black box model (e.g., a DNN), and use a second (posthoc) model created to explain what the first model is doing. A classic example here is LIME, which explores a local area of a complex model to uncover its decision boundaries.
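To make the posthoc idea concrete, here is a minimal sketch of the LIME-style approach rather than the official `lime` package API: perturb an input around a point of interest, query the black-box model for its predictions, and fit a proximity-weighted linear surrogate whose coefficients serve as the local explanation. The random-forest stand-in, kernel width, and perturbation scale below are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box model we want to explain (a stand-in for a DNN).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, predict_proba, n_samples=1000, kernel_width=0.75):
    """Fit a proximity-weighted linear surrogate around instance x."""
    rng = np.random.default_rng(0)
    # Sample perturbations from a Gaussian centred on x.
    perturbed = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # Ask the black box for its predicted probabilities on the perturbations.
    preds = predict_proba(perturbed)[:, 1]
    # Weight samples by proximity to x using an exponential kernel.
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # The surrogate's coefficients approximate the local decision boundary.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

coefs = explain_locally(X[0], black_box.predict_proba)
print("Approximate local feature importances:", np.round(coefs, 3))
```

The key point for Rudin's argument is that the surrogate explains only an approximation of the black box in one local region; it is a second model, not the model actually making the prediction.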
