Machine Learning Explainability in Practice


Does this happen to you often: you have built an excellent ML model for a particular use case, but stakeholders question it on grounds of transparency? In a digital world of antitrust litigation and billions of dollars in penalties for breaking regulations, stakeholders are often hesitant to release the best-performing solution (a complex Machine Learning (ML) or Deep Learning (DL) model) and instead go with rule-based or linear models that are easier to interpret. Is there a way to get the best of both worlds? ML explainability, or interpretability, can help you ship the best solution along with a reasonable explanation for its predictions. For a long time, ML models were considered black boxes because it was almost impossible to explain what happened to the data between the input and the output.
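As a minimal sketch of what "opening the black box" can look like, the snippet below trains a random forest (a typical non-linear model) on synthetic data and uses permutation importance, a model-agnostic explainability technique, to rank features by how much the model depends on them. The dataset, feature count, and model choice here are illustrative assumptions, not from any particular use case.

```python
# Illustrative sketch: explaining a "black box" model with
# permutation feature importance (model-agnostic).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a real use case: 5 features,
# only the first 2 actually carry signal (shuffle=False keeps
# the informative features in the leading columns).
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, n_redundant=0,
                           shuffle=False, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and
# measure the drop in score -- a larger drop means the model
# relies on that feature more.
result = permutation_importance(model, X, y, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```

Techniques like this give stakeholders a concrete answer to "which inputs drove this model's behavior?" without forcing a retreat to a simpler, weaker model.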