Optimal Interpretability-Performance Trade-off of Classification Trees with Black-Box Reinforcement Learning
Kohler, Hector, Akrour, Riad, Preux, Philippe
–arXiv.org Artificial Intelligence
Interpretability of AI models allows for user safety checks to build trust in these models. In particular, decision trees (DTs) provide a global view of the learned model and clearly outline the role of the features that are critical to classifying a given data point. However, interpretability is hindered if the DT is too large. To learn compact trees, a Reinforcement Learning (RL) framework has recently been proposed to explore the space of DTs. A given supervised classification task is modeled as a Markov decision problem (MDP) and then augmented with additional actions that gather information about the features, which is equivalent to building a DT. By appropriately penalizing these actions, the RL agent learns to optimally trade off the size and performance of a DT. To do so, however, this RL agent has to solve a partially observable MDP. The main contribution of this paper is to prove that solving a fully observable problem is sufficient to learn a DT that optimizes the interpretability-performance trade-off; as a result, any planning or RL algorithm can be used. We demonstrate the effectiveness of this approach on a set of classical supervised classification datasets and compare it with other methods that optimize the same trade-off.
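The framework described above can be sketched in miniature. The following is a hedged illustration, not the paper's implementation: class and variable names (`ClassificationMDP`, `zeta`) are our own. Each episode draws one sample; the agent either queries a feature (paying a penalty `zeta`, which is what trades tree size against accuracy) or predicts a class, which ends the episode with a reward of 1 if correct and 0 otherwise.

```python
import random

class ClassificationMDP:
    """Hypothetical sketch of a supervised classification task recast as
    an episodic MDP with information-gathering (feature query) actions."""

    def __init__(self, X, y, zeta=0.1, seed=0):
        self.X, self.y = X, y
        self.zeta = zeta  # per-query penalty: larger zeta favors smaller trees
        self.rng = random.Random(seed)
        self.n_features = len(X[0])

    def reset(self):
        # Draw one labeled sample; all feature values start hidden (None).
        self.i = self.rng.randrange(len(self.X))
        self.obs = [None] * self.n_features
        return tuple(self.obs)

    def step(self, action):
        # action is ("query", feature_index) or ("predict", class_label)
        kind, arg = action
        if kind == "query":
            self.obs[arg] = self.X[self.i][arg]
            return tuple(self.obs), -self.zeta, False  # pay the query penalty
        reward = 1.0 if arg == self.y[self.i] else 0.0
        return tuple(self.obs), reward, True           # episode terminates
```

A deterministic policy over such observations (which feature to query next, and when to predict) is exactly a decision tree, and maximizing the expected return balances accuracy against the number of queries, i.e., tree depth.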
Apr-11-2023