A Parable Of Explainability


Ok, sure, machine learning is great for making predictions, but you can't use it to replace scientific theory. Not only will it fail to reach generalizable conclusions; the result will lack elegance and explainability. We won't be able to understand it or build upon it! What makes an algorithm, theory, or agent explainable? It's certainly not the ability to "look inside": we're quite happy assuming that black boxes, such as brains, are capable of explaining their conclusions and theories. Meanwhile, we scoff at the idea that perfectly transparent neural networks are "explainable" in any meaningful sense. So it's not visibility that makes something explainable.
