Incorporating Interpretable Output Constraints in Bayesian Neural Networks
Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
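The abstract does not give the mathematical form of the constraint-incorporating prior, but the idea it describes can be illustrated with a minimal sketch: combine an ordinary isotropic Gaussian prior on the weights with a soft penalty that lowers the prior density of any weight setting whose network outputs violate an interpretable output constraint at sampled inputs. All names, the tiny MLP architecture, and the hinge-style penalty below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

HIDDEN = 10  # hypothetical width of a 1-HIDDEN-1 MLP used for illustration

def mlp_forward(w, x, hidden=HIDDEN):
    """Forward pass of a small 1-hidden-layer MLP, weights unpacked from flat w."""
    W1 = w[:hidden].reshape(hidden, 1)
    b1 = w[hidden:2 * hidden]
    W2 = w[2 * hidden:3 * hidden].reshape(1, hidden)
    b2 = w[3 * hidden]
    h = np.tanh(x @ W1.T + b1)          # (n, hidden)
    return h @ W2.T + b2                 # (n, 1)

def violation_penalty(y, lower=0.0):
    """Hinge-style penalty: zero when every output satisfies y >= lower."""
    return float(np.sum(np.maximum(lower - y, 0.0) ** 2))

def log_prior(w, constraint_x, tau=100.0, sigma=1.0):
    """Log of an output-constrained prior (illustrative sketch):
    isotropic Gaussian over weights, minus tau times the constraint
    violation of the network's outputs on sampled constraint inputs."""
    log_gauss = -0.5 * np.sum(w ** 2) / sigma ** 2
    y = mlp_forward(w, constraint_x)
    return log_gauss - tau * violation_penalty(y)

# Usage: inputs sampled from the region where the constraint must hold.
x_constraint = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
w = np.zeros(3 * HIDDEN + 1)
lp = log_prior(w, x_constraint)  # usable inside any black-box BNN sampler
```

Because the penalty only modifies the log prior, such a term can in principle be dropped into any black-box inference scheme (e.g. HMC or variational inference) that works with an unnormalized log posterior, which is consistent with the abstract's claim of black-box-amenable inference.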
Review for NeurIPS paper: Incorporating Interpretable Output Constraints in Bayesian Neural Networks
Additional Feedback: Post-response update: The author response addressed my concerns very well, and the paper is good enough to be accepted, despite the limited novelty. I am increasing my score to 7. ---- The paper proposes a new, more general formalism for handling output constraints in BNNs. The space of constrained neural networks is already crowded, and while Section 2 gives a good overview of the differences, it would greatly improve the paper to also define the differences between competing constraining methods, and their scopes, mathematically. Overall I had a hard time understanding the constraint definitions (see below for minor comments). The constraint formalism needs to be explicated better. I could not follow the math anymore.