Interpretability in Machine Learning
Should we always trust a model that performs well? A model could reject your mortgage application or diagnose you with cancer. The consequences of these decisions are serious and, even when they are correct, we would expect an explanation. A human could tell you that your income is too low for a mortgage, or that a specific cluster of cells is likely malignant. A model that provided similar explanations would be more useful than one that only provided predictions. When we obtain such explanations, we say we are interpreting the machine learning model.