Author response for "Fixing the train-test resolution discrepancy"

Neural Information Processing Systems

We thank the reviewers for their constructive feedback on the paper. Here we answer their main questions and comments. In addition, are the results shown significant? In particular, we have evaluated our approach for transfer learning for low-resource and/or fine-grained classification. Then (3) we use our method, i.e. we fine-tune the last [...]. Finally, we applied our method to a very large ResNeXt-101 32x48d model from [Mahajan et al.].
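The excerpt only describes the method in words. As a minimal sketch, assuming a standard PyTorch setup, the snippet below shows the core idea of fine-tuning only the final classifier of a pretrained network at a higher test resolution; the backbone choice (resnet50), the resolutions, and the hyperparameters are illustrative assumptions, not values from the paper, which also adapts batch-norm behavior at the test resolution.

```python
# Minimal sketch (not the authors' released code): fine-tune only the
# final classifier layer of a pretrained network at a higher test
# resolution. Model, resolutions, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

TEST_RES = 384  # higher than the usual 224px training resolution (assumed)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the backbone; only the final fully connected layer is trained.
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True
model.train()  # lets batch-norm statistics adapt to the new resolution

# Preprocessing at the (higher) test resolution.
preprocess = transforms.Compose([
    transforms.Resize(TEST_RES + 32),
    transforms.CenterCrop(TEST_RES),
    transforms.ToTensor(),
])

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch already resized to TEST_RES."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The ResNeXt-101 32x48d experiment mentioned above would follow the same recipe with that backbone swapped in; only the classifier fine-tuning and the higher-resolution preprocessing change.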
Neural Information Processing Systems

R2: Sparsity: Thank you for mentioning that the unification of core parts of the sparse neural network literature combined with the neural architecture search problem is very nice. As a useful experiment to showcase this, we apply Algorithm 1 separately to each filter of the version of ResNet50 used in [1] and obtain state-of-the-art performance at the task of training sparse networks from scratch as described in [1].

R2: Theoretical Claims: We agree that this work would benefit from theoretical analysis that extends beyond decreasing the loss on the mini-batch with the SGD update. We thank you for the excellent suggestion of offering the additional motivation of performing an approximation in the backward pass. We are happy to see that we may arrive at the same objective in this way, and we will include this exciting result.
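Algorithm 1 itself is not reproduced in this excerpt, so the following is a rough illustration only: it applies a simple magnitude-pruning step separately to each convolutional filter of a ResNet50, as a stand-in for whatever per-filter criterion Algorithm 1 actually uses. The function name sparsify_per_filter and the sparsity level are assumptions.

```python
# Rough illustration only: per-filter sparsification of conv layers.
# Magnitude pruning stands in for the excerpt's "Algorithm 1", which
# is not shown here; the 90% sparsity level is an assumption.
import torch
import torch.nn as nn
from torchvision import models

def sparsify_per_filter(conv: nn.Conv2d, sparsity: float = 0.9) -> None:
    """Zero the smallest-magnitude weights within each filter
    independently, so every filter keeps the same fraction of weights."""
    with torch.no_grad():
        weight = conv.weight  # shape: (out_channels, in_channels, kH, kW)
        for f in range(weight.shape[0]):
            filt = weight[f].flatten()        # view into this filter
            k = int(sparsity * filt.numel())  # number of weights to prune
            if k == 0:
                continue
            # Indices of the k smallest-magnitude weights in this filter.
            _, idx = torch.topk(filt.abs(), k, largest=False)
            filt[idx] = 0.0

# Example: sparsify every convolutional filter of a ResNet50 separately.
model = models.resnet50()
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        sparsify_per_filter(module, sparsity=0.9)
```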