Interpretable DNFs
Cooper, Martin C., Bousdira, Imane, Carbonnel, Clément
arXiv.org Artificial Intelligence
A classifier is considered interpretable if each of its decisions has an explanation small enough to be easily understood by a human user. A DNF formula can be seen as a binary classifier $\kappa$ over Boolean domains. The size of an explanation of a positive decision taken by a DNF $\kappa$ is bounded by the size of the terms of $\kappa$, since a positive decision can be explained by exhibiting a term of $\kappa$ that evaluates to true. Since both positive and negative decisions must be explained, we consider interpretable DNFs to be those $\kappa$ for which both $\kappa$ and $\overline{\kappa}$ can be expressed as DNFs composed of terms of bounded size. In this paper, we study the family of $k$-DNFs whose complements can also be expressed as $k$-DNFs. We compare two such families: depth-$k$ decision trees and nested $k$-DNFs, a novel family of models. Experiments indicate that nested $k$-DNFs are an interesting alternative to decision trees in terms of both interpretability and accuracy.
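The term-based explanation mechanism described in the abstract can be sketched in a few lines of Python. The encoding of a DNF as a list of terms, and the function name `dnf_classify`, are illustrative assumptions, not artifacts of the paper:

```python
# Hedged sketch: a DNF as a binary classifier over Boolean inputs.
# A term is a list of literals (feature_index, required_value);
# the DNF is a disjunction (list) of such terms.

def dnf_classify(dnf, x):
    """Return (decision, explanation).

    A positive decision is explained by returning one satisfied term,
    whose size bounds the size of the explanation, as noted in the abstract.
    """
    for term in dnf:
        if all(x[i] == v for i, v in term):
            return True, term  # this term alone justifies the positive decision
    return False, None

# Example 2-DNF: (x0 AND NOT x1) OR (x2 AND x3)
dnf = [
    [(0, 1), (1, 0)],
    [(2, 1), (3, 1)],
]

decision, why = dnf_classify(dnf, [1, 0, 0, 0])
# decision is True, explained by the term (x0 AND NOT x1)
```

Explaining a negative decision is harder in general, which is why the paper restricts attention to DNFs $\kappa$ whose complement $\overline{\kappa}$ is also a $k$-DNF: a satisfied term of $\overline{\kappa}$ then explains a negative decision with the same size bound.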
May-28-2025