We thank all reviewers for their valuable comments. We will further improve the paper in the final version.

Q1: Beyond regression tasks. In this paper, we focus on regression tasks. It is indeed an interesting topic to systematically investigate the robustness of … We note that there is some recent work (e.g., [*1]) that studies the robustness of MMD estimators.

Q3: Results of larger networks. Guo et al. (ICML 2017) argued that the miscalibration was due to the sheer size of … We will make this clearer in the final version.
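Since Q1 touches on the robustness of MMD estimators, a minimal sketch of the standard unbiased squared-MMD estimator may help ground the discussion. This is a generic illustration with an RBF kernel; the function names and the bandwidth `gamma` are illustrative choices, not taken from the paper:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of a and rows of b.
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2_unbiased(x, y, gamma=1.0):
    """Unbiased estimate of squared MMD between samples x and y."""
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, gamma)
    kyy = rbf_kernel(y, y, gamma)
    kxy = rbf_kernel(x, y, gamma)
    # Exclude the diagonal (self-similarity) terms for unbiasedness.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()
```

The estimate is near zero when both samples come from the same distribution and grows as the distributions separate.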
We appreciate the time and effort the reviewers invested in examining our work and providing detailed comments.
We will improve the overall presentation and add more details for better accessibility. Thank you for your review and clarifying questions. For interpretability, we chose the bank dataset (the feature set description is provided in Appendix C.1). Circles in Figure 2(c) indicate the agents whose Katz centrality increased when the structure was perturbed for testing.
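For readers less familiar with the measure behind Figure 2(c), Katz centrality has a simple closed form. The sketch below is a generic illustration, not the paper's implementation; the `alpha` and `beta` values are illustrative defaults:

```python
import numpy as np

def katz_centrality(adj, alpha=0.1, beta=1.0):
    """Katz centrality x = beta * (I - alpha * A^T)^{-1} 1.

    alpha must be smaller than 1 / (largest eigenvalue of adj)
    for the series to converge.
    """
    n = adj.shape[0]
    x = np.linalg.solve(np.eye(n) - alpha * adj.T, beta * np.ones(n))
    return x / np.linalg.norm(x)
```

For example, in a star graph the hub receives the highest Katz centrality, and adding edges to an agent (as in a structural perturbation) raises its score.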
Reviewer 1

Detailed comments: I'd like to see time comparisons for training and inference. GPUs/TPUs don't support these representations, so this would require an implementation on FPGAs, which would be a …

Author response: We thank the reviewer for the comments. In our hardware (ASIC) experiments, we've seen a …

Reviewer 2

Improvements: It would be helpful to clarify the data formats of each step in Table 2. It would be helpful to clarify that the weight update is applied to 1/N of the weights in Table 3.

Author response: We thank the reviewer for the comments. Tables 2 and 3 will be updated.

Do the frameworks now have to support two more data types?
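To make the "update applied to 1/N of the weights" point concrete, here is an illustrative sketch of one way such a scheme could work; the round-robin group schedule and the name `n_groups` are assumptions for exposition, not the paper's actual mechanism:

```python
import numpy as np

def partial_update(weights, grads, lr, step, n_groups):
    """Apply the gradient update to only 1/n_groups of the weights,
    cycling through the groups round-robin across steps."""
    # Select the slice of weights belonging to this step's group.
    mask = (np.arange(weights.size) % n_groups) == (step % n_groups)
    out = weights.copy()
    out[mask] -= lr * grads[mask]
    return out
```

Over `n_groups` consecutive steps every weight is touched exactly once, so the amortized update cost per step is 1/N of a full dense update.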
Reviews: Model-Agnostic Private Learning
The paper considers a new differentially private learning setting: the learner receives a collection of unlabeled public data on top of the labelled private data of interest, and the two data sets are assumed to be drawn from the same distribution. The proposed technique allows the use of (non-private) agnostic PAC learners as black-box oracles, which, when combined with …, adapts to the structure of the data sets. The idea is summarized below:

1. Do differentially private model serving in a data-adaptive fashion, through the "sparse vector" technique and "subsample-and-aggregate". This only handles a finite number of classification queries.

2. It behaves similarly to 1, using the properties of an agnostic PAC learner, but can now handle an unbounded number of classification queries.
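The aggregation step in point 1 above can be sketched as follows. This is a generic illustration of subsample-and-aggregate with a noisy vote (report-noisy-max), with the Laplace noise scale and function names chosen for exposition rather than taken from the paper:

```python
import numpy as np

def noisy_label(teacher_votes, num_classes, eps, rng):
    """Subsample-and-aggregate label release.

    Each "teacher" is trained on a disjoint shard of the private data,
    so one individual's record can change at most one vote. Returning
    the argmax of the Laplace-noised vote histogram therefore gives a
    differentially private answer to a single classification query.
    """
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=2.0 / eps, size=num_classes)
    return int(np.argmax(counts))
```

When the teachers largely agree, the noise rarely flips the winning label, which is why such data-adaptive aggregation can answer queries accurately at modest privacy cost; the per-query cost is also what limits a naive version to finitely many queries.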