Export Reviews, Discussions, Author Feedback and Meta-Reviews
– Neural Information Processing Systems
First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance.

The submission describes a convex deep learning formulation that leverages several key ideas. First, a training objective is proposed that explicitly includes the outputs of the hidden layers as variables to be inferred via optimization. These are linked to linear responses via a loss function, and the overall objective is the sum of these loss functions across the layers, plus regularization terms. Next, a series of changes of variables reparameterizes the objective into a convex form, heavily leveraging the representer theorem and the idea of value regularization. The result is a convex objective over three matrices to optimize per layer. In particular, one of these matrices is a nonparametric 'normalized output kernel' matrix, which takes the place of optimizing over the hidden-layer outputs directly; however, this leads to a transductive method in which the optimization must be solved simultaneously over training and test inputs.
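The layer-wise construction described in the summary — hidden-layer outputs treated as explicit variables, each linked to the previous layer's linear response by a loss, plus regularization — can be sketched as follows. The notation here is illustrative, not taken from the submission:

```latex
% Hedged sketch of the pre-reparameterization objective described above.
% Hidden-layer outputs H_l are explicit optimization variables, each
% coupled to the linear response W_l H_{l-1} of the previous layer by a
% per-layer loss L_l; the weights carry regularizers R with weights
% lambda_l. Boundary conditions tie the first and last layers to the
% data. All symbols are assumptions for illustration.
\min_{\{W_l\},\,\{H_l\}} \;
  \sum_{l=1}^{L} L_l\!\left(H_l,\; W_l H_{l-1}\right)
  \;+\; \sum_{l=1}^{L} \lambda_l\, R(W_l),
\qquad H_0 = X \;\text{(inputs)}, \quad H_L = Y \;\text{(targets)}.
```

The reparameterization the review then describes would replace direct optimization over each $H_l$ with optimization over a normalized output kernel matrix (roughly $K_l \propto H_l^{\top} H_l$), which is what makes the method transductive: the kernel must cover training and test points jointly.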