Interpretable Neural Networks based classifiers for categorical inputs

Stefano Zamuner, Paolo De Los Rios

arXiv.org Machine Learning 

The increasing and ubiquitous use of machine learning (ML) algorithms in many technological [1], financial [2, 3] and medical applications [4] calls for an improved understanding of their inner workings, that is, for more interpretable algorithms. Indeed, difficulties in understanding how neural networks operate constitute a major problem in sensitive applications such as self-driving vehicles or medical diagnosis, where errors from the machine could result in otherwise avoidable accidents and human losses. In fact, the impossibility of fully grasping the decision process undertaken by the network not only prevents humans from supervising such decisions and, where necessary, correcting them, but also hinders our ability to use these algorithms to better understand the problem under scrutiny and to inspire new, improved methods and approaches for solving it. Thus, the development and deployment of interpretable neural networks could represent an important step toward improving user trust and, consequently, toward fostering the adoption of Artificial Intelligence systems in common, everyday tasks [5, 6].
