Neuro-Symbolic Rule Lists

Sascha Xu, Nils Philipp Walter, Jilles Vreeken

arXiv.org Machine Learning 

Machine learning models deployed in sensitive areas such as healthcare must be interpretable to ensure accountability and fairness. However, learning such rule lists presents significant challenges. Existing methods based on combinatorial optimization require feature pre-discretization and impose restrictions on rule size. Neuro-symbolic methods use more scalable continuous optimization, yet place similar pre-discretization constraints and suffer from unstable optimization. We formulate a continuous relaxation of the rule list learning problem that converges to a strict rule list through temperature annealing.

Machine learning models are increasingly used in high-stakes applications such as healthcare (Deo, 2015), credit risk evaluation (Bhatore et al., 2020), and criminal justice (Lakkaraju & Rudin, 2017), where it is vital that each decision is fair and reasonable. Proxy measures such as Shapley values can give the illusion of interpretability, but are highly problematic as they cannot faithfully represent a non-additive model's decision process (Gosiewska & Biecek, 2019). Instead, Rudin (2019) argues that it is crucial to use inherently interpretable models to create systems with human supervision in the loop (Kleinberg et al., 2018). For particularly sensitive domains such as stroke prediction or recidivism, so-called rule lists are a popular choice (Letham et al., 2015) due to their fully transparent decision making. A rule list predicts based on nested "if-then-else" statements and naturally aligns with the human decision-making process. Each rule is active if its conditions are met, e.g. "if Thalassemia = normal and Resting bps < 151", and carries a respective prediction, i.e. "then P(Disease) = 10%".
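The nested "if-then-else" structure described above can be sketched as plain code. This is a hypothetical illustration mirroring the heart-disease example rule from the text; the second rule, the default probability, and the feature names are assumptions for illustration only, not rules learned by the paper's method. The paper's contribution is to relax the hard conditions below into differentiable, temperature-controlled approximations that converge back to strict rules as the temperature is annealed.

```python
def rule_list_predict(x):
    """Walk the rule list top to bottom; the first rule whose
    conditions hold determines the prediction P(Disease)."""
    # Rule 1: example rule from the text.
    if x["thalassemia"] == "normal" and x["resting_bps"] < 151:
        return 0.10  # "then P(Disease) = 10%"
    # Rule 2: hypothetical second rule for illustration.
    if x["chest_pain"] == "asymptomatic":
        return 0.70
    # Default rule: the final "else" branch every rule list carries.
    return 0.45

patient = {"thalassemia": "normal", "resting_bps": 140,
           "chest_pain": "typical"}
print(rule_list_predict(patient))  # -> 0.1
```

Because exactly one branch fires per input, the model's reasoning for any single prediction is fully transparent: it is the conjunction of conditions on the first matching rule.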
