Interpretable Neural Networks based classifiers for categorical inputs
Stefano Zamuner, Paolo De Los Rios
The increasing and ubiquitous use of machine learning (ML) algorithms in many technological [1], financial [2, 3] and medical applications [4] calls for an improved understanding of their inner workings, that is, for more interpretable algorithms. Indeed, difficulties in understanding how neural networks operate are a major problem in sensitive applications such as self-driving vehicles or medical diagnosis, where machine errors could result in otherwise avoidable accidents and loss of human life. The inability to fully grasp the decision process undertaken by a network not only prevents humans from supervising such decisions and, where necessary, correcting them, but also hinders our ability to use these algorithms to better understand the problem under scrutiny and to inspire new, improved methods and approaches for solving it. The development and deployment of interpretable neural networks could therefore be an important step toward improving user trust and, consequently, fostering the adoption of Artificial Intelligence systems in common, everyday tasks [5, 6].
February 5, 2021