Interpretable classifiers for tabular data via discretization and feature selection
Jaakkola, Reijo, Janhunen, Tomi, Kuusisto, Antti, Rankooh, Masood Feyzbakhsh, Vilander, Miikka
Explainability and human interpretability are becoming an increasingly important part of research on machine learning. In addition to the immediate benefits of explanations and interpretability in scientific contexts, the capacity to provide explanations behind automated decisions has also been widely addressed on the level of legislation. For example, the European General Data Protection Regulation [8] and the California Consumer Privacy Act [4] both refer to the right of individuals to obtain explanations of automated decisions concerning them. This article investigates interpretability in the framework of tabular data. Tabular data is highly important in numerous scientific and real-life contexts and is often even regarded as the most important form of data; see, e.g., [22, 2]. The aim of the current article is to introduce an efficient method for extracting highly interpretable binary classifiers from tabular data. While explainable AI (or XAI) methods custom-made for images and text cannot be readily used in the setting of tabular data [16], numerous successful XAI methods for tabular data exist. See the survey [20] for an overview of XAI in relation to tabular data. The authors are listed in alphabetical order.
arXiv.org Artificial Intelligence
Feb-8-2024
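
As a purely illustrative reading of the pipeline named in the title, the minimal sketch below combines discretization of numeric columns, selection of a small feature subset, and a simple rule-producing binary classifier on a tabular dataset. The specific components (scikit-learn's KBinsDiscretizer, mutual-information feature selection, and a shallow decision tree) and the example dataset are assumptions chosen for illustration; they are not the method proposed in the paper.

```python
# Generic illustration only: discretize features, select a few of them,
# then fit a small, human-readable binary classifier. Component choices
# are assumptions, not the authors' method.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.tree import DecisionTreeClassifier

# A standard binary-classification tabular dataset used here as a stand-in.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = Pipeline([
    # Turn each numeric column into a few ordinal bins (discretization).
    ("discretize", KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="quantile")),
    # Keep only a handful of informative features (feature selection).
    ("select", SelectKBest(mutual_info_classif, k=5)),
    # A shallow tree over the binned, selected features yields short,
    # readable decision rules.
    ("classify", DecisionTreeClassifier(max_depth=3, random_state=0)),
])

clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The design intuition is that discretization makes individual conditions readable ("feature in bin 3") while feature selection bounds how many such conditions a classifier can mention, so the resulting model remains small enough to inspect by hand.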
- Country:
- Europe (0.93)
- North America > United States
- California > San Francisco County > San Francisco (0.14)
- Genre:
- Research Report > New Finding (0.46)
- Industry:
- Information Technology > Security & Privacy (1.00)
- Law (1.00)