Incorporating Interpretable Output Constraints in Bayesian Neural Networks

Neural Information Processing Systems

Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
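To make the idea of an output-constrained prior concrete, below is a minimal sketch (not the authors' implementation) of how a constraint on a BNN's outputs can be folded into its log-prior. It assumes a tiny one-hidden-layer network, an isotropic Gaussian weight prior, and a soft hinge penalty on outputs that fall outside an interval [y_lo, y_hi] over a constrained input region; names such as `penalty_scale` and the network shape are illustrative choices, not taken from the paper.

```python
import numpy as np

def forward(w, x, hidden=16):
    """Unpack a flat weight vector into a 1-16-1 MLP and evaluate it."""
    w1 = w[:hidden].reshape(1, hidden)
    b1 = w[hidden:2 * hidden]
    w2 = w[2 * hidden:3 * hidden].reshape(hidden, 1)
    b2 = w[3 * hidden]
    h = np.tanh(x @ w1 + b1)
    return (h @ w2 + b2).ravel()

def oc_log_prior(w, x_constrained, y_lo, y_hi,
                 weight_scale=1.0, penalty_scale=10.0):
    """Gaussian log-prior plus a soft output-constraint term.

    The constraint term is zero whenever the network's outputs on
    `x_constrained` lie inside [y_lo, y_hi] and grows linearly with
    the violation otherwise, so constraint-violating weight settings
    receive low prior mass. (Illustrative penalty form, assumed here.)
    """
    log_p_gauss = -0.5 * np.sum((w / weight_scale) ** 2)
    y = forward(w, x_constrained)
    violation = np.maximum(y_lo - y, 0.0) + np.maximum(y - y_hi, 0.0)
    return log_p_gauss - penalty_scale * np.sum(violation)

# Usage: score a random weight sample against a positivity constraint
# (outputs must be non-negative) on inputs in [0, 1].
rng = np.random.default_rng(0)
w = rng.normal(size=3 * 16 + 1)
x_c = np.linspace(0.0, 1.0, 32).reshape(-1, 1)
print(oc_log_prior(w, x_c, y_lo=0.0, y_hi=np.inf))
```

Because the constraint enters only through the (unnormalized) log-prior, any black-box posterior sampler or variational method that consumes a log-density can be used unchanged, which is what makes this construction compatible with standard BNN inference.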


Review for NeurIPS paper: Incorporating Interpretable Output Constraints in Bayesian Neural Networks

Neural Information Processing Systems

Additional Feedback: Post-response update: The author response addressed my concerns very well, and the paper is good enough to be accepted, despite the limited novelty. I am increasing my score to 7. ---- The paper proposes a new, more general formalism for handling output constraints in BNNs. The space of constrained neural networks is already crowded, and while Section 2 gives a good overview of the differences, the paper would be greatly improved by also defining mathematically the differences between competing constraining methods and their scopes. Overall, I had a hard time understanding the constraint definitions (see below for minor comments). The constraint formalism needs to be explained more clearly; past a certain point, I could not follow the math.


Review for NeurIPS paper: Incorporating Interpretable Output Constraints in Bayesian Neural Networks

Neural Information Processing Systems

The reviewers highlighted the strengths of your experiments. While some questioned the novelty, I am definitely persuaded by the novelty and utility of your approach. I think you have made a very worthwhile contribution to the practical use of BNNs on many problems.
