Unlocking Business Value from Machine Learning: Model Interpretability


For the same reason that star players often make poor coaches, models that make complicated decisions at high levels of abstraction come at a price: they can't easily explain their reasoning. This is a direct (and sometimes expensive) tradeoff, and machine learning follows the same paradigm. The more powerful the model, the harder it is to interpret its inner workings. Sure, you may get a more accurate answer from a neural network, but how it arrived at that answer can be a total mystery. That becomes a problem when you're trying to figure out what went wrong or how to improve the model.
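A minimal sketch of this tradeoff, using made-up weights and inputs purely for illustration: a linear model's score decomposes into per-feature contributions you can read off directly, while even a tiny one-hidden-layer network mixes features through a nonlinearity, so no such decomposition exists.

```python
import math

# Hypothetical feature weights and inputs, invented for illustration only.
weights = {"income": 0.4, "debt": -0.7, "tenure": 0.2}
x = {"income": 1.0, "debt": 0.5, "tenure": 2.0}

# Linear model: the score is a plain sum of per-feature contributions,
# so the effect of each feature on the prediction is transparent.
contributions = {f: weights[f] * x[f] for f in weights}
linear_score = sum(contributions.values())

# Tiny one-hidden-layer network on the same inputs: tanh mixes the
# features inside each hidden unit, so the output no longer splits
# into independent per-feature contributions.
hidden_w = [[0.3, -0.5, 0.1], [0.6, 0.2, -0.4]]  # hidden-layer weights
out_w = [0.8, -0.3]                               # output weights
inputs = [x["income"], x["debt"], x["tenure"]]
hidden = [math.tanh(sum(w * v for w, v in zip(row, inputs)))
          for row in hidden_w]
nn_score = sum(w * h for w, h in zip(out_w, hidden))
```

With the linear model you can answer "why was this score 0.45?" by pointing at each contribution; with the network, explaining `nn_score` requires separate interpretability tooling.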
