Is Explainability In AI Always Necessary?
"AI models do not need to be interpretable to be useful."

Interpretability in machine learning dates back to the 1990s, when it was called neither "interpretability" nor "explainability". Interpretable and explainable machine learning techniques emerged from two needs: to design intelligible machine learning systems, and to understand and explain the predictions made by opaque models such as deep neural networks. The ML community has yet to agree on a definition of explainability or interpretability; the property is sometimes even called understandability.
Apr-1-2021, 07:30:13 GMT