Regional Tree Regularization for Interpretability in Black Box Models
Mike Wu, Sonali Parbhoo, Michael Hughes, Ryan Kindle, Leo Celi, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
--The lack of interpretability remains a barrier to the adoption of deep neural networks. Recently, tree regularization has been proposed to encourage deep neural networks to resemble compact, axis-aligned decision trees without significantly compromising accuracy. However, it may be unreasonable to expect a single tree to predict well across all possible inputs. In this work, we propose regional tree regularization, which encourages a deep model to be well-approximated by several separate decision trees specific to predefined regions of the input space. Practitioners can define regions based on domain knowledge of contexts where different decision-making logic is needed. Across many datasets, our approach delivers more accurate predictions than simply training separate decision trees for each region, while producing simpler explanations than other neural network regularization schemes without sacrificing predictive power. Two healthcare case studies, in critical care and HIV, demonstrate how experts can improve their understanding of deep models via our approach.

Introduction

Deep models have become the state of the art in applications ranging from image classification [1] to game playing [2], and are poised to advance prediction in real-world domains such as healthcare [3]-[5]. However, understanding when a model's outputs can be trusted, and how the model might be improved, remains a challenge. Without interpretability, humans cannot incorporate their domain knowledge or effectively audit predictions. As such, many efforts have been devoted to extracting explanations from deep models post hoc. Prior work has focused on two opposing regimes: global and local explanation. Unfortunately, if a global explanation is simple enough to be understandable, it is unlikely to be faithful to the deep model across all inputs. In contrast, local explanations are faithful only near a single input; such isolated glimpses of the model's behavior can fail to capture larger patterns.
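A minimal sketch of the regional idea described above: for each predefined input region, fit a small decision tree to a black-box model's predictions and measure the tree's average decision-path length, the complexity quantity that tree regularization penalizes. All names here (`black_box`, the `regions` dictionary, the synthetic data) are illustrative assumptions, not the paper's actual code or surrogate-loss formulation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for a trained neural network: a fixed nonlinear decision rule.
    return (np.sin(3 * X[:, 0]) + X[:, 1] > 0.5).astype(int)

def avg_path_length(tree, X):
    # decision_path counts nodes visited (root through leaf) per sample;
    # subtracting 1 gives the number of decisions, a proxy for how
    # simple the tree explanation is.
    nodes_per_sample = tree.decision_path(X).sum(axis=1)
    return float(np.mean(nodes_per_sample)) - 1.0

def regional_tree_complexity(X, regions):
    # Fit one shallow surrogate tree per region to the black-box
    # predictions, and report each region's average path length.
    y_hat = black_box(X)
    costs = {}
    for name, mask in regions.items():
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        tree.fit(X[mask], y_hat[mask])
        costs[name] = avg_path_length(tree, X[mask])
    return costs

X = rng.uniform(-1, 1, size=(500, 2))
# Regions would come from domain knowledge; here, a simple split on x0.
regions = {"left": X[:, 0] < 0, "right": X[:, 0] >= 0}
print(regional_tree_complexity(X, regions))
```

In the paper's training setup this per-region complexity is made differentiable via a surrogate and added to the loss; the sketch above only shows the quantity being controlled, evaluated post hoc.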
Aug-13-2019