How to Convince Your Boss to Trust Your ML/DL Models
Some company managers and stakeholders are skeptical of machine learning model predictions, so it falls to data scientists to show that those predictions are credible and understandable to humans. That means we should focus not only on building powerful machine learning and deep learning models, but also on making them interpretable. Interpretability helps in many ways: it shows how a model reaches a decision, justifies individual predictions, surfaces insights, builds trust in the model, and points the way to improving it. Model interpretation comes in two flavors: global (how the model behaves overall across the data) and local (why it made one particular prediction). Linear regression and decision trees are good examples of inherently interpretable models, since their coefficients and split rules can be read off directly.
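To make the linear regression point concrete, here is a minimal sketch of why such a model is inherently interpretable. The dataset, variable names, and the `fit_ols` helper are illustrative assumptions, not from this article; the idea is simply that the fitted slope and intercept are the model's explanation, which a stakeholder can verify by hand.

```python
# Hypothetical example: a one-feature linear regression fit by
# closed-form ordinary least squares, with no ML library needed.

def fit_ols(xs, ys):
    """Closed-form OLS for y = intercept + slope * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data (assumed for illustration): each unit increase in x
# raises the prediction by exactly `slope`, a claim anyone can check.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
intercept, slope = fit_ols(xs, ys)
print(f"prediction = {intercept:.1f} + {slope:.1f} * x")
```

The printed equation is both the model and its global explanation: the slope states the model's learned effect of the feature, which is exactly the kind of human-readable justification a skeptical manager can inspect.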
Sep-15-2022, 18:55:11 GMT