Author response for "Fixing the train-test resolution discrepancy"

Neural Information Processing Systems

We thank the reviewers for their constructive feedback on the paper; here we answer their main questions and comments, including whether the results shown are significant. In particular, we have evaluated our approach on transfer learning for low-resource and/or fine-grained classification, where (3) we apply our method, i.e., we fine-tune the last layers. Finally, we applied our method to a very large ResNeXt-101 32x48d model from [Mahajan et al.].


Author Response for "Shaping Belief States with Generative Environment Models for RL"

Neural Information Processing Systems

We are grateful for all the constructive and actionable feedback provided by the reviewers, and we believe we have addressed their key concerns below. We are working to improve our explanations in Section 2.2 based on all the feedback. We emphasize that careful empirical experimentation in ML can also bring valuable insights to the community; studying these factors requires an intersectional empirical study such as this paper. Probabilistic models benefit more from overshoot than deterministic models.


Author Response for "The Unreasonable Effectiveness of Big Models for Semi-Supervised Learning"

Neural Information Processing Systems

We thank the reviewers for their feedback and for their efforts in reviewing; we respond to each comment below. On the comment that "there is no significant contribution to unsupervised pre-training" and that our main contribution is a detailed procedure rather than a theorem, architecture, or other artifact: we believe our contributions are significant. Indeed, R3 recognizes that "the simple semi-supervised framework … will inspire several future works." These results can be further improved with better augmentations during fine-tuning and an extra distillation step.



Submission 180: Author Response

Neural Information Processing Systems

We thank the reviewers for their thoughtful comments. Reviewers have described our work as "extremely important in that it provides a reality check …". Reviewers' comments below have been paraphrased for brevity.
R3: It looks like the random image regularizer hurts in-domain performance.
R3: Do other VQA datasets (e.g., GQA, VCR) have the same problem?
R2: Do other datasets used for OOD evaluation have problems similar to VQA-CP's?


Author Response for Paper 6449

Neural Information Processing Systems

We thank the reviewers for their time and helpful feedback! Given this new literature, however, we find that our original claims of novelty are still valid. (II) We explained that the Neural-Adjoint (NA) is based directly on the approach proposed in [21], and therefore our original claimed contributions are still valid. Given the expanded related work, however, we do agree that we should revise our NA branding.


Author Response to Reviews

Neural Information Processing Systems

Thank you for taking the time to read the paper, and for the positive feedback! Below are responses to each reviewer. Thank you for your detailed reading of the paper and for the positive feedback! With even more runs, we expect this distinction would be even clearer. This is the core of the transfer-learning question and a central part of our paper.


On Robustness of Principal Component Regression: Author Response

Neural Information Processing Systems

We begin by thanking all reviewers for their extremely encouraging and helpful responses. We agree that the fact that we apply PCR to both the training and testing covariates should be placed more explicitly in the context of transductive semi-supervised learning. We have strived to interpret our major theorem results (Thm 4.2 & Thm 5.1) by (i) providing examples of natural generating … (our bounds, e.g., Proposition 4.2, should be tight). The empirical results support our theoretical guarantees.


Author Response: Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability

Neural Information Processing Systems

We think our contribution relative to the breakthrough work of Diakonikolas et al. is not just that our algorithm is proper. We also agree that it is hard to do justice to all the technical ingredients in just 8 pages; however, the results actually build on each other.