Reviews: Diminishing Returns Shape Constraints for Interpretability and Regularization

Neural Information Processing Systems 

The authors argue that accelerating/decelerating return constraints can help model interpretability. To this end, they propose two methods: (i) an extension of input convex neural networks that supports monotonicity constraints; (ii) an extension of partial monotonic lattice models that supports concavity/convexity constraints. The proposed methods are compared to several constrained and unconstrained baselines on different datasets.

Pros:
- The paper is very well written, and definitely relevant to NIPS
- Constraining the model to have accelerating returns w.r.t.

It seems that the term "interpretability" here is used as a synonym of "visualizability".
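As background to the summary above, the convexity guarantee of an input convex neural network can be sketched in a few lines. This is a minimal illustrative example, not the authors' architecture: all weight names and the two-layer shape are assumptions. The key idea is that nonnegative weights on the hidden-to-hidden path, combined with convex nondecreasing activations (ReLU), make the output convex in the input.

```python
import numpy as np

def icnn(x, W0, b0, Wz, Wx, b1):
    """Toy 2-layer input convex network: output is convex in x."""
    z1 = np.maximum(0.0, W0 @ x + b0)      # affine map followed by ReLU: convex in x
    # np.abs enforces the nonnegativity constraint on the hidden-path weights;
    # a nonnegative combination of convex functions plus an affine passthrough
    # remains convex, and ReLU (convex, nondecreasing) preserves convexity.
    z2 = np.abs(Wz) @ z1 + Wx @ x + b1
    return np.maximum(0.0, z2).sum()

rng = np.random.default_rng(0)
W0, b0 = rng.normal(size=(8, 3)), rng.normal(size=8)
Wz = rng.normal(size=(4, 8))
Wx, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)

# Numerical midpoint check of convexity: f((a+b)/2) <= (f(a) + f(b)) / 2
a, b = rng.normal(size=3), rng.normal(size=3)
mid = icnn((a + b) / 2, W0, b0, Wz, Wx, b1)
avg = (icnn(a, W0, b0, Wz, Wx, b1) + icnn(b, W0, b0, Wz, Wx, b1)) / 2
assert mid <= avg + 1e-9
```

Adding a monotonicity constraint on top of this (as the reviewed paper does) would additionally restrict the input weights `W0` and `Wx` to be nonnegative, making the output nondecreasing in each input.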