We would like to thank the reviewers for their detailed and insightful comments. We have addressed the concerns regarding SGD, and we obtain similar results with Adam. The Gaussian sketch performs better than SVRG on problem (1) and is robust to the choice of hyperparameters. We will include these results in the final version.
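As an illustration of the sketching idea referred to above, the following is a minimal sketch, assuming problem (1) is a generic overdetermined least-squares objective (the paper's actual problem and solver are not reproduced here; all sizes and names are illustrative):

```python
import numpy as np

# Hypothetical illustration: a Gaussian sketch compresses an n x d data
# matrix A down to m x d (m << n) while approximately preserving the
# least-squares geometry, so the sketched problem can be solved cheaply.
rng = np.random.default_rng(0)
n, d, m = 1000, 10, 100

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)  # mildly noisy targets

# Gaussian sketching matrix, scaled so that E[S^T S] = I_n.
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Solve the sketched problem instead of the full one.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
```

With m chosen a small multiple of d, the sketched solution is close to the full solution at a fraction of the cost.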
Reviewer #1: We appreciate the many insightful comments from this reviewer. We have included more scenarios in the paper; here are three of them. In this paper, SM stands for the standard two-layer GCN model. In the last few days, we have worked hard to carry out more experiments on other datasets, including 'Citeseer'.
Table 1: Mean prediction accuracy for 'Citeseer'.
Figure 1: Boxplot of RMSEs in the real data analysis.
Reviewer #2: We appreciate the many insightful comments from this reviewer.
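For concreteness, a minimal sketch of a standard two-layer GCN forward pass (the SM baseline mentioned above); the layer sizes, activations, and weights below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def gcn_forward(adj, feats, w1, w2):
    """Standard two-layer GCN forward pass (illustrative sketch only):
    H = softmax( A_hat @ relu( A_hat @ X @ W1 ) @ W2 ),
    with A_hat = D^{-1/2} (A + I) D^{-1/2}.
    """
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    h = np.maximum(a_norm @ feats @ w1, 0.0)      # layer 1 + ReLU
    logits = a_norm @ h @ w2                      # layer 2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)       # row-wise softmax

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy graph
feats = rng.standard_normal((3, 4))
probs = gcn_forward(adj, feats,
                    rng.standard_normal((4, 8)),
                    rng.standard_normal((8, 2)))
```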
(Excerpt) "… propose the first finite-time system identification algorithm for partially observable linear dynamical systems (LDS) …"
We thank the reviewers for their effort and insightful comments during these unprecedented times. LQR and LQG are among the few continuous settings where optimal policies exist (and mostly have closed form) [1]; therefore, we do not see why this paper would be less relevant to our community. If PE is absent, we provide two general algorithms stated in Cor. The agent uses a warm-up period of O(√T), after which it commits to a controller, yielding a regret of Õ(√T).
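The explore-then-commit schedule described above can be sketched as follows; the constant, the estimator, and the controller synthesis are placeholders, not the paper's algorithm:

```python
import math

def etc_schedule(T, c=1.0):
    """Hypothetical explore-then-commit split: excite the system for a
    warm-up of length ~ c * sqrt(T), then commit a (certainty-equivalent)
    controller for the remaining steps. Purely illustrative."""
    warmup = int(math.ceil(c * math.sqrt(T)))
    return warmup, T - warmup

for T in (100, 10_000, 1_000_000):
    warmup, commit = etc_schedule(T)
    # the warm-up fraction warmup / T shrinks like 1 / sqrt(T)
```

The point of the √T warm-up is that its own contribution to regret is already of the same order as the regret of the committed controller, so neither phase dominates.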
We would like to thank each of the reviewers for the constructive and insightful comments on our manuscript. We will further polish the paper based on your suggestions to address the remaining writing issues. R3, R5: Why we use self-attention: the reasons are discussed in lines 308-315 of our paper. In addition, we agree that it is more realistic to model label uncertainty.
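For readers unfamiliar with the mechanism discussed in R3/R5's question, a generic single-head scaled dot-product self-attention sketch is below; the paper's exact attention variant, dimensions, and weights are not reproduced here:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Minimal scaled dot-product self-attention (generic sketch).
    Each output row is a weighted mix of all value rows, which is why
    self-attention suits settings where every element of a sequence
    should condition on every other element.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])             # pairwise affinities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)               # row-wise softmax
    return w @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 16))                       # 5 tokens, dim 16
out = self_attention(x, *(rng.standard_normal((16, 16)) for _ in range(3)))
```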
(Reviewer excerpt) "parts of the proposed method might not be explained enough, which might make it difficult to appreciate some of the …"
We thank all the reviewers for the responses and detailed comments. The first difference between the earlier SRM and our SSTL lies in how the shared space is defined. Empirical studies in [3] also showed that the original forms of SRM and HA (i.e., the … Yes, this subject ordering can matter, but this is fairly standard, i.e., … The revised version will explicitly summarize the entire training and performance evaluation processes. Reviewer 1: Thank you for your insightful comments. Instead, we said that 'scatter matrices … The revision will address all of those comments.
[Submission 1194: "DISK"] We thank all reviewers for their insightful comments, and address their concerns.
R1: DISK builds on previous work (U-Net, SuperPoint) and only offers moderate innovation. We will clarify this in the paper. We tuned the inference parameters (NMS window and RANSAC settings) by search, as described in L194-197. R1, R3, R5: What is the contribution of the individual components of the pipeline? Experimentally, we observe that 19.9% of the features from grid selection … This has three potential downsides.
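To make the NMS-window parameter mentioned above concrete, here is a minimal sketch of window-based non-maximum suppression on a keypoint score map; this is an illustrative stand-in, not DISK's implementation:

```python
import numpy as np

def nms_grid(scores, window):
    """Keep only local maxima of a 2D score map within a
    (2*window+1) x (2*window+1) neighbourhood. A larger window keeps
    fewer, more widely spaced keypoints (illustrative sketch only)."""
    h, w = scores.shape
    keep = np.zeros_like(scores, dtype=bool)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - window), min(h, y + window + 1)
            x0, x1 = max(0, x - window), min(w, x + window + 1)
            keep[y, x] = scores[y, x] == scores[y0:y1, x0:x1].max()
    return keep

rng = np.random.default_rng(0)
scores = rng.random((20, 20))      # stand-in for a detector heatmap
mask = nms_grid(scores, window=2)  # boolean map of surviving keypoints
```

Every keypoint surviving a large window also survives a smaller one, which is why the window size acts as a density/coverage trade-off at inference time.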
We thank all the reviewers for their insightful comments, suggestions, and references. Novelty of the tandem loss: it is not new, but we were not aware of the prior work; we thank Reviewer 2 for bringing it up. While most of the computed bounds are non-vacuous, they do not appear to be very tight; the reviewers also asked for a discussion of potential ways to obtain tighter bound values, or whether there is a fundamental limitation. We provide some discussion in Sections 3.2 and 4.4.
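For reference, the empirical tandem loss for a uniformly weighted ensemble can be sketched as follows: for each pair of classifiers it measures how often both err on the same example (this matches the usual definition in the PAC-Bayes majority-vote literature; the paper's exact estimator and weighting are not reproduced here):

```python
import numpy as np

def tandem_loss(preds, y):
    """Empirical tandem loss of an ensemble with uniform weights:
    average over classifier pairs (i, j) of the fraction of examples
    on which BOTH classifier i and classifier j are wrong."""
    errs = (preds != y[None, :]).astype(float)   # (n_classifiers, n_samples)
    pair = errs @ errs.T / y.shape[0]            # pairwise co-error rates
    return pair.mean()                           # uniform weights over pairs

preds = np.array([[0, 1, 1, 0],     # toy predictions, 3 classifiers
                  [0, 1, 0, 0],
                  [1, 1, 1, 0]])
y = np.array([0, 1, 1, 0])          # toy labels
t = tandem_loss(preds, y)
```

Because it penalizes correlated errors rather than individual errors, the tandem loss is what second-order majority-vote bounds are built from, which is also why such bounds can stay loose when individual classifiers are weak but diverse.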