Decision Tree Search as a Markov Decision Problem
Hector Kohler, Riad Akrour, Philippe Preux
–arXiv.org Artificial Intelligence
Finding an optimal decision tree for a supervised learning task is a combinatorial problem that is challenging to solve at scale. It was recently proposed to frame the problem as a Markov Decision Problem (MDP) and to use deep reinforcement learning to tackle scaling. Unfortunately, these methods are not competitive with the current branch-and-bound state of the art. We propose instead to scale the resolution of such MDPs using an information-theoretic test-generating function that heuristically, and dynamically for every state, limits the set of admissible test actions to a few good candidates. As a solver, our algorithm is empirically at least competitive with branch-and-bound alternatives. As a machine learning tool, a key advantage of our approach is that it solves for multiple complexity-performance trade-offs at virtually no additional cost. From such a set of solutions, a user can then select the tree that generalizes best and whose interpretability level best suits their needs, which no current branch-and-bound method allows.
Jan-22-2024
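The core heuristic the abstract describes, restricting the MDP's action set at each state (tree node) to a few high-information-gain tests, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the axis-aligned threshold tests, and the fixed top-k selection are assumptions made for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(X, y, feature, threshold):
    """Entropy reduction from splitting on x[feature] <= threshold."""
    left = [yi for xi, yi in zip(X, y) if xi[feature] <= threshold]
    right = [yi for xi, yi in zip(X, y) if xi[feature] > threshold]
    n = len(y)
    if not left or not right:  # degenerate split: no information gained
        return 0.0
    return (entropy(y)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

def top_k_tests(X, y, k=3):
    """Heuristically limit the admissible test actions at a state
    to the k axis-aligned tests with the highest information gain."""
    candidates = []
    for f in range(len(X[0])):
        for t in sorted({xi[f] for xi in X}):
            candidates.append((information_gain(X, y, f, t), f, t))
    candidates.sort(reverse=True)
    return [(f, t) for _, f, t in candidates[:k]]

# A dynamic MDP action set: at the state holding this data,
# only the returned (feature, threshold) tests are admissible actions.
X = [(0, 0), (1, 0), (0, 1), (1, 1)]
y = [0, 0, 1, 1]
actions = top_k_tests(X, y, k=3)
```

Because the candidate set is recomputed per state from the data reaching that node, the action space stays small even when the full set of possible tests is large, which is what makes the MDP tractable at scale.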