
Counterexample-Guided Learning of Monotonic Neural Networks

Neural Information Processing Systems

The widespread adoption of deep learning is often attributed to its automatic feature construction with minimal inductive bias. However, in many real-world tasks, the learned function is intended to satisfy domain-specific constraints. We focus on monotonicity constraints, which are common and require that the function's output increases with increasing values of specific input features. We develop a counterexample-guided technique to provably enforce monotonicity constraints at prediction time. Additionally, we propose a technique to use monotonicity as an inductive bias for deep learning. It works by iteratively incorporating monotonicity counterexamples into the learning process. Contrary to prior work in monotonic learning, we target general ReLU neural networks and do not further restrict the hypothesis space. We have implemented these techniques in a tool called COMET. Experiments on real-world datasets demonstrate that our approach achieves state-of-the-art results compared to existing monotonic learners, and can improve model quality compared to models trained without taking monotonicity constraints into account.
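The paper finds monotonicity counterexamples with an SMT solver over the ReLU network's piecewise-linear structure and folds them back into training. The sketch below illustrates only the shape of that iterate-search-retrain loop on a tiny NumPy network: it substitutes a grid search for the solver and a simple label-averaging heuristic for the paper's counterexample incorporation, so the function names (`find_counterexamples`, `train`) and the augmentation rule are illustrative assumptions, not COMET's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: the target is monotone in x, but noise can make a
# naively trained network locally decreasing.
X = rng.uniform(0, 1, size=(200, 1))
y = X[:, 0] + 0.3 * rng.normal(size=200)

def init(h=16):
    """One-hidden-layer ReLU network: parameters [W1, b1, W2, b2]."""
    return [rng.normal(0, 1, (1, h)), np.zeros(h),
            rng.normal(0, 1, (h, 1)), np.zeros(1)]

def forward(p, X):
    W1, b1, W2, b2 = p
    Z = np.maximum(X @ W1 + b1, 0)           # ReLU hidden layer
    return Z, (Z @ W2 + b2).ravel()

def train(p, X, y, lr=0.05, steps=2000):
    """Plain gradient descent on MSE (manual backprop)."""
    for _ in range(steps):
        Z, out = forward(p, X)
        g = 2 * (out - y) / len(y)           # dMSE/dout
        W1, b1, W2, b2 = p
        gW2 = Z.T @ g[:, None]
        gb2 = g.sum(keepdims=True)
        gZ = (g[:, None] @ W2.T) * (Z > 0)   # backprop through ReLU
        gW1 = X.T @ gZ
        gb1 = gZ.sum(0)
        p[0] -= lr * gW1; p[1] -= lr * gb1
        p[2] -= lr * gW2; p[3] -= lr * gb2
    return p

def find_counterexamples(p, grid):
    # Grid-search stand-in for the paper's SMT query:
    # pairs x < x' with f(x) > f(x') violate monotonicity.
    _, out = forward(p, grid)
    return [(grid[i, 0], grid[j, 0])
            for i in range(len(grid)) for j in range(i + 1, len(grid))
            if out[i] > out[j] + 1e-6]

params = train(init(), X, y)
grid = np.linspace(0, 1, 50)[:, None]

for _ in range(5):                           # counterexample-guided loop
    ces = find_counterexamples(params, grid)[:100]  # cap augmentation per round
    if not ces:
        break
    # Heuristic incorporation: relabel both violating points with the
    # average of the network's two outputs, then retrain on augmented data.
    lo = np.array([[a] for a, b in ces])
    hi = np.array([[b] for a, b in ces])
    _, f_lo = forward(params, lo)
    _, f_hi = forward(params, hi)
    mid = (f_lo + f_hi) / 2
    X = np.vstack([X, lo, hi])
    y = np.concatenate([y, mid, mid])
    params = train(params, X, y, steps=500)
```

Unlike this sketch, the solver-based search in the paper is exhaustive over the input space, which is what makes the prediction-time guarantee provable rather than empirical.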


Review for NeurIPS paper: Counterexample-Guided Learning of Monotonic Neural Networks

Neural Information Processing Systems

Summary and Contributions: RE authors' feedback: I think this method is primarily useful for offline evaluation with no latency requirements. There are many such use cases, which justifies the relevance of this work. However, I think the authors should be clear and upfront about the scalability and latency issues of their proposed methods and mention them in the paper. The previous submission of this work that I reviewed did not have any timing details, and I am happy they added those to the experimental section. My overall score remains the same.


Review for NeurIPS paper: Counterexample-Guided Learning of Monotonic Neural Networks

Neural Information Processing Systems

The reviewers agreed that this was an interesting and novel approach to imposing monotonicity, and that the paper was mostly well-written (although R4's review contains some suggested improvements). The main criticisms were that (i) the datasets in the experiments were small, and (ii) using an SMT solver at evaluation time might be too expensive for many applications. R3 also mentioned that the limitation to ReLU networks could be somewhat constraining. These issues, however, were agreed to be outweighed by the strengths of the paper, and all reviewers recommended acceptance. Please carefully read the reviews, and take them seriously when making edits: the paper is very good already, and while of course experiments should not be overhauled between submission and a final version, implementing some of the reviewers' suggestions (especially adding a more in-depth discussion of evaluation-time costs, and their impact on real-world systems) could improve it even further.

