that we tackle an important problem of interest to the NeurIPS community, and acknowledge our extensive and insightful
Thank you for the thoughtful feedback and comments; we are delighted to see this positive response. ARL would only improve performance for groups that are computationally identifiable over (x, y). Our experiments on label bias (Figure 1(b)) shed some light on this. This observation, along with your other comment on equipping robustness, ignited interesting discussions among the authors.
reproducibility is of central importance to the whole NeurIPS community and was also unanimously identified during 2
First of all, we thank all reviewers for their valuable time and feedback. We thank the reviewers for pointing out typos and grammatical errors, which we have now fixed. We are afraid that the reviewer might have misunderstood some parts of the paper. We refer to the original paper for further details about the approximation of the variational posterior, and we have clarified this point in the main paper.
As suggested by Reviewer 1, we will provide
We thank the reviewers for their valuable suggestions. Please find our answers for each reviewer (R) below. To apply our methodology to other programming environments (e.g., Python problems), one would first need to establish the corresponding setup; nevertheless, the results demonstrate the benefits of our approach. Z3 appears to be very effective because it jointly considers all the constraints. As stated in L240-242, we analyzed a random sample of 100 outputs per reference task; we will clarify this in the updated paper.
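The benefit of jointly considering all constraints, as an SMT solver like Z3 does, over satisfying them one at a time can be illustrated with a toy example. This is a hedged, pure-Python sketch (not our actual Z3 encoding): a greedy strategy that fixes variables to satisfy the first constraint alone can paint itself into a corner, while a joint search over all constraints finds a globally consistent assignment.

```python
from itertools import product

# Toy constraint system over two integer variables x, y in [0, 5].
# Constraint A: x + y == 5.  Constraint B: x - y == 1.
# Satisfying A alone might pick (0, 5), which violates B; searching
# for an assignment that satisfies A and B together yields (3, 2).

def solve_jointly(constraints, domain):
    """Return the first (x, y) assignment satisfying ALL constraints at once."""
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraints):
            return (x, y)
    return None

constraints = [lambda x, y: x + y == 5, lambda x, y: x - y == 1]
print(solve_jointly(constraints, range(6)))  # -> (3, 2)
```

A real solver replaces this brute-force enumeration with clause learning and theory reasoning, but the point stands: the constraints must be solved as one system, not sequentially.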
The proxy support measure in the formulation is a continuous measure, and in the implementation we choose a uniform measure.
We thank the reviewers for their time, valuable feedback, and thoughtful suggestions. Below are our answers to the reviewers' comments, grouped by topic. To the best of our knowledge, all prior works consider fixed-support settings. Regarding the concern about the lack of a conclusion on which method to choose, we have included a discussion, and we will add recommendations for higher-dimensional situations to accompany the results in Table B.1. Regarding the concern that recovery methods are costly, method (d) in fact comes at almost no cost. Even for the continuous regularized OT distance [5] this is not well understood; we agree these are areas for future work. We focus on Gaussians, whose low sample complexity favors discretized methods more than other settings.
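To make the fixed-support, uniform-weight setting concrete: for two empirical measures on the real line with the same number of atoms and uniform weights, the 1-Wasserstein distance reduces to matching sorted samples pairwise. This is a standard illustrative sketch, not the method of the paper:

```python
def w1_uniform_1d(xs, ys):
    """1-Wasserstein distance between two uniform discrete measures on R
    with the same number of atoms: the optimal coupling is the monotone
    matching, i.e. pair the i-th smallest atoms of each measure."""
    assert len(xs) == len(ys), "both measures need the same number of atoms"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(w1_uniform_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # -> 1.0
```

This closed form is what gives discretized methods their appeal in low dimensions; for continuous or regularized formulations no such simple expression is available.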
We thank the reviewers for their useful and thoughtful feedback. We are glad to see that our work was found "highly relevant to the NeurIPS community" (R1) and to be addressing "a well-motivated problem" (R3) in "an important area". That is, "the experiments show that the proposed method outperforms the baselines". We address the reviewers' comments below; by the quoted phrase, we refer to our method for noisy inference. Regarding the question of how the estimation error would change the claims of the paper: our claims are unaffected. While we share the reviewers' desire for convergence guarantees, these are beyond the present scope. We go from Eq. 7 to Eq. 8 by swapping the order of the terms. Regarding the noiseless case, see Figure 1. In Eq. 7, the prior density is already included. The ablation studies are mentioned in the text and fully reported in the Supplement (E.3, "Lesion study").
We thank the reviewers for the thoughtful feedback in these difficult times caused by the global COVID-19 pandemic. QM9 is used for training, the model must be based on LCAO, and QDF achieved high extrapolation performance. We emphasize that even this LDA-like HK map achieved high extrapolation performance; we will address this further in future work. Of course, QDF can be proposed without a comparison to GCN.