Diminishing Returns Shape Constraints for Interpretability and Regularization

Maya Gupta, Dara Bahri, Andrew Cotter, Kevin Canini

Neural Information Processing Systems

Similarly, a model that predicts the time it will take a customer to grocery shop should decrease in the number of cashiers, but each added cashier reduces average wait time by less. In both cases, we would like to be able to incorporate this prior knowledge by constraining the machine learned model's output to have a diminishing returns response to the size of the apartment or number of cashiers.
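The diminishing-returns prior described above can be stated concretely for outputs sampled on an evenly spaced input grid: each additional unit of input must change the output by less, in magnitude, than the previous unit did. A minimal sketch (the helper name `has_diminishing_returns` and the wait-time numbers are illustrative, not from the paper):

```python
def has_diminishing_returns(outputs):
    """Check the diminishing-returns property over evenly spaced inputs:
    each step in the input changes the output by less (in magnitude)
    than the previous step did. Illustrative helper, not the paper's code.
    """
    increments = [b - a for a, b in zip(outputs, outputs[1:])]
    return all(abs(later) <= abs(earlier)
               for earlier, later in zip(increments, increments[1:]))

# Average grocery wait time vs. number of cashiers: each added cashier
# still reduces the wait, but by less than the cashier before it did.
wait_times = [10.0, 6.0, 4.0, 3.0, 2.5]
print(has_diminishing_returns(wait_times))  # True
```

For an increasing output such as apartment price versus size, the same check applies: the marginal increments themselves must shrink.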


Diminishing Returns Shape Constraints for Interpretability and Regularization

Neural Information Processing Systems

We investigate machine learning models that can provide diminishing returns and accelerating returns guarantees to capture prior knowledge or policies about how outputs should depend on inputs. We show that one can build flexible, nonlinear, multi-dimensional models using lattice functions with any combination of concavity/convexity and monotonicity constraints on any subsets of features, and compare to new shape-constrained neural networks. We demonstrate on real-world examples that these shape-constrained models can provide tuning-free regularization and improve model understandability.
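One simple way such monotonicity and concavity guarantees can be obtained by construction, in the spirit of the constrained models the abstract describes (an illustrative sketch, not the paper's implementation): parameterize a piecewise-linear function by unconstrained values, square them to force non-negative slopes (monotonicity), and take a running minimum so slopes never increase (concavity, i.e. diminishing returns). The function name and parameterization here are assumptions for illustration.

```python
import numpy as np

def monotone_concave_pwl(keypoints_x, raw_params):
    """Piecewise-linear outputs over keypoints_x that are increasing and
    concave by construction. Hypothetical sketch, not the paper's code.

    raw_params: one unconstrained parameter per segment. Squaring makes
    each slope non-negative (monotonicity); a running minimum makes the
    slopes non-increasing (concavity / diminishing returns).
    """
    slopes = np.minimum.accumulate(np.square(raw_params))
    dx = np.diff(keypoints_x)
    # Integrate the constrained slopes to get the output at each keypoint.
    return np.concatenate([[0.0], np.cumsum(slopes * dx)])

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = monotone_concave_pwl(x, np.array([2.0, 1.0, 1.5, 0.5]))
print(y)  # [0.   4.   5.   6.   6.25]
```

Because the constraints hold for any raw parameter values, an optimizer can train `raw_params` freely with no projection step, which is one route to the "tuning-free regularization" the abstract mentions.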


Reviews: Diminishing Returns Shape Constraints for Interpretability and Regularization

Neural Information Processing Systems

The authors argue that accelerating/decelerating return constraints can help model interpretability. To this end, they propose two methods: (i) an extension of input convex neural networks that supports monotonicity constraints; (ii) an extension of partial monotonic lattice models that supports concavity/convexity constraints. The proposed methods are compared to several constrained and unconstrained baselines on different datasets.

Pros:
- The paper is very well written, and definitely relevant to NIPS.
- Constraining the model to have accelerating returns w.r.t.

It seems that the term "interpretability" here is used as a synonym of "visualizability".

