Neural Information Processing Systems

Checklist

1. For all authors...
   (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]

3. If you ran experiments...
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
   (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
   (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
   (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?

4. If you are using existing assets...
   (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
   (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating?

5. If you used crowdsourcing or conducted research with human subjects...
   (a) ...

The full version of Table 1 is given in Table 3.

That is, the following relationships hold: 2g(u) = sup ... This formulation can be found in Lemma 3.1 of Jenatton et al.

First we compute the gradient g'(u) = Σ ...

A.7 Log Sum

First, we compute the derivative g(u) = log(√u + ε) ⟹ ... = 0, (46) which gives the inverse mapping ...

However, it is separable, and in one dimension we have g(u) = 𝟙{u > 0}.

... ≥ m ..., (69) where Conv(A) is the convex hull of the set A. Similarly define S̄ ...

Running for 1000 epochs, for example, brings the fraction of nonzeros down to around 0.1, at a slight cost in accuracy.
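Assuming the log-sum penalty in Section A.7 is g(u) = log(√u + ε) for u > 0, as the fragment suggests, its derivative follows directly from the chain rule; a sketch of the computation (the paper's equation (46) may use a different normalization):

```latex
% Derivative of the log-sum penalty g(u) = log(sqrt(u) + eps), for u > 0.
% Reconstruction from the fragment; chain rule through sqrt(u).
\[
  g(u) = \log\!\left(\sqrt{u} + \varepsilon\right)
  \quad\Longrightarrow\quad
  g'(u) = \frac{1}{\sqrt{u} + \varepsilon} \cdot \frac{1}{2\sqrt{u}}
        = \frac{1}{2\sqrt{u}\left(\sqrt{u} + \varepsilon\right)}.
\]
```

Note that g'(u) → ∞ as u → 0⁺, which is the property that drives small entries toward exact zero in sparsity-inducing formulations.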
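The fraction-of-nonzeros statistic quoted above (around 0.1 after 1000 epochs) is straightforward to compute from a weight tensor; a minimal sketch, where `fraction_nonzero` and the tolerance `tol` are illustrative names, not from the paper:

```python
import numpy as np

def fraction_nonzero(w, tol=0.0):
    # Fraction of entries with magnitude above tol;
    # tol=0 counts exactly-nonzero entries.
    w = np.asarray(w, dtype=float)
    return float(np.count_nonzero(np.abs(w) > tol)) / w.size

# Toy weight vector: 2 of 5 entries are nonzero.
weights = [0.0, 0.5, 0.0, -1.2, 0.0]
print(fraction_nonzero(weights))  # 0.4
```

In practice a small positive `tol` is often used, since iterative training rarely produces exact floating-point zeros.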