Review for NeurIPS paper: Learning Invariances in Neural Networks from Training Data


Weaknesses: I have two major concerns with this submission. First, the paper claims that prior work on data augmentation needs to know the invariance of interest a priori. However, this paper requires exactly the same thing, as the invariance of interest must be expressible by the learnable mapping. For instance, the authors prescribe a transformation family restricted to translation, rotation, scaling, and shearing invariances during training; at test time, rotation is precisely the nuisance transformation at play. Second, the proposed test-time data augmentation is a well-known technique, also often used to learn equivariant/invariant classifiers.
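For context, the test-time augmentation scheme the review refers to amounts to averaging a model's outputs over a set of transformed copies of the input. A minimal sketch (the linear stand-in model and the choice of 90-degree rotations as the transformation group are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

def predict_logits(x):
    # Hypothetical stand-in for a trained classifier:
    # a fixed random linear map on the flattened input.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(x.size, 3))
    return x.reshape(-1) @ w

def tta_predict(x, model=predict_logits):
    # Test-time augmentation: average the model's outputs over a
    # group of transformations (here, the four 90-degree rotations),
    # yielding predictions that are invariant to that group.
    outputs = [model(np.rot90(x, k)) for k in range(4)]
    return np.mean(outputs, axis=0)
```

Averaging over the full rotation group makes `tta_predict` exactly invariant to those rotations, regardless of whether the underlying model is.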