Incorporating Interpretable Output Constraints in Bayesian Neural Networks
Wanqian Yang, Lars Lorch, Moritz A. Graule, Himabindu Lakkaraju, Finale Doshi-Velez
Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
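The abstract describes placing a prior directly on network outputs in constrained input regions, alongside the usual weight prior, so that the unnormalized log-posterior remains usable with black-box inference. The paper itself does not include code here; the following is a minimal NumPy sketch of that idea under assumed specifics: a tiny one-hidden-layer BNN, a Gaussian weight prior, and a hypothetical soft-penalty form of the constraint prior (the function names, the penalty shape, and the scale `gamma` are all illustrative, not the authors' implementation).

```python
import numpy as np

def bnn_forward(w, x, hidden=10):
    """Tiny 1-hidden-layer BNN: unpack a flat weight vector w and map x -> y."""
    W1 = w[:hidden].reshape(hidden, 1)
    b1 = w[hidden:2 * hidden]
    W2 = w[2 * hidden:3 * hidden].reshape(1, hidden)
    b2 = w[3 * hidden]
    h = np.tanh(x @ W1.T + b1)        # (n, hidden)
    return (h @ W2.T + b2).ravel()    # (n,)

def log_weight_prior(w, sigma=1.0):
    """Standard isotropic Gaussian prior over the weights."""
    return -0.5 * np.sum(w ** 2) / sigma ** 2

def log_constraint_prior(w, x_constrained, y_lo, y_hi, gamma=10.0):
    """Output constraint as a soft penalty (hypothetical form): evaluate the
    network on points from the constrained input region and penalize
    predictions falling outside the expert-specified band [y_lo, y_hi]."""
    y = bnn_forward(w, x_constrained)
    violation = np.maximum(y_lo - y, 0.0) + np.maximum(y - y_hi, 0.0)
    return -gamma * np.sum(violation ** 2)

def log_joint(w, x, y, x_constrained, y_lo, y_hi, noise=0.1):
    """Unnormalized log-posterior: Gaussian likelihood + weight prior +
    output-constraint prior. Any black-box sampler or variational method
    that only needs log-density evaluations can target this."""
    resid = y - bnn_forward(w, x)
    log_lik = -0.5 * np.sum(resid ** 2) / noise ** 2
    return (log_lik
            + log_weight_prior(w)
            + log_constraint_prior(w, x_constrained, y_lo, y_hi))

# Toy usage: fit data on [-2, 2], constrain outputs to [-0.5, 0.5] on [3, 4].
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(50, 1))
y = np.sin(2 * x).ravel() + 0.1 * rng.standard_normal(50)
x_c = rng.uniform(3, 4, size=(20, 1))   # region covered by expert knowledge
w = rng.standard_normal(3 * 10 + 1)     # flat parameter vector for hidden=10
print(log_joint(w, x, y, x_c, y_lo=-0.5, y_hi=0.5))
```

Because the constraint enters only through an additive log-prior term on function outputs, the user states knowledge in interpretable output space ("predictions in this region must lie in this band") rather than over uninterpretable weights, which is the point the abstract emphasizes.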
Oct 21, 2020