Interpretable Machine Learning: An Overview – Becoming Human: Artificial Intelligence Magazine
Despite their high predictive performance, many machine learning techniques remain black boxes: it is difficult to understand the role each feature plays and how features combine to produce a prediction. Yet users need to understand and trust the decisions made by machine learning models, especially in sensitive fields such as medicine. For this reason, there is an increasing need for methods that can explain the individual predictions of a model, that is, ways to understand which features led the model to its prediction for a specific instance. Consider, for example, a neural network (the machine learning model) trained as an image classifier. Given a picture (an observation), the model outputs a probability, say 0.98, that a cat appears in it, so we could say that "our model predicts that this is a cat with a probability of 0.98".
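To make the idea of explaining an individual prediction concrete, here is a minimal sketch in pure Python. It uses a hypothetical linear (logistic) model with made-up weights and features, not a real neural network: for such a model, each feature's additive contribution to the score can be read off directly, which is exactly the kind of per-instance insight that explanation methods try to recover for black-box models.

```python
import math

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical trained weights for three illustrative image-derived features.
weights = {"whiskers": 2.1, "pointed_ears": 1.4, "fur_texture": 0.9}
bias = -1.5

# One instance (an "observation"): feature values extracted from a picture.
instance = {"whiskers": 1.0, "pointed_ears": 1.0, "fur_texture": 0.8}

# Prediction: probability that the picture contains a cat.
logit = bias + sum(weights[f] * instance[f] for f in weights)
probability = sigmoid(logit)
print(f"P(cat) = {probability:.2f}")

# Explanation of this individual prediction: each feature's additive
# contribution to the logit, i.e. how much it pushed the score up or down.
contributions = {f: weights[f] * instance[f] for f in weights}
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {value:+.2f}")
```

For a deep network no such exact decomposition exists, which is why dedicated explanation techniques approximate the model locally around the instance being explained.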
Jan-11-2019, 00:54:06 GMT