

in which the model keeps predicting the class of each unlabeled sample and learns from the feedback of whether the prediction is correct

Neural Information Processing Systems

We thank all reviewers for their valuable comments. The reviewers are generally satisfied with our writing and experiments; the remaining concerns are minor, and we respond to them carefully below and will revise the paper accordingly. Common question: can one-bit supervision reduce annotation costs? According to the authors of ILSVRC2012 [Russakovsky et al., IJCV'15], the average time for a full-bit annotation is ... A1: Please refer to the common question.
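
For intuition about the setting described above, here is a minimal sketch of a one-bit feedback loop: a placeholder linear scorer guesses a class for each unlabeled sample, and a single bit of feedback either confirms the guess (yielding a full label) or rules out that one class. The scorer, synthetic data, and bookkeeping are illustrative assumptions, not the authors' implementation.

import numpy as np

# Minimal sketch of a one-bit supervision loop (illustrative only).
# The linear scorer, synthetic data, and bookkeeping are placeholder assumptions.
rng = np.random.default_rng(0)
num_classes, num_unlabeled, num_features = 10, 100, 32

X_unlabeled = rng.normal(size=(num_unlabeled, num_features))
true_labels = rng.integers(num_classes, size=num_unlabeled)   # hidden from the learner
W = rng.normal(scale=0.01, size=(num_features, num_classes))  # placeholder linear scorer

confirmed, excluded = [], []
for i, x in enumerate(X_unlabeled):
    guess = int(np.argmax(x @ W))            # model predicts a class for the sample
    correct = bool(guess == true_labels[i])  # annotator returns a single bit of feedback
    if correct:
        confirmed.append((i, guess))         # the guess becomes a full label
    else:
        excluded.append((i, guess))          # only this one class can be ruled out

print(f"confirmed labels: {len(confirmed)}, samples with one class ruled out: {len(excluded)}")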





the final paper (i.e., a big-picture description without causal discovery jargon, intuitive sepset (in)consistency examples, ...

Neural Information Processing Systems

We would like to thank the reviewers for their insightful comments and suggestions. Reply to Reviewer 1 ("discuss computational complexity"): The reviewer is right in pointing out that the complexity of ensuring sepset consistency needs to be clarified. The PC algorithm runs in exponential time in the worst case, but usually in polynomial time on sparse DAGs. The added complexity is thus at worst linear in all cases. Reply to Reviewer 2 ("provide additional experimental evaluation on standard benchmarks"): We provide additional evaluations on networks from the Bayesian Network repository, which show a clearer performance improvement over standard PC; see the figure below.
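
To make the complexity claim concrete, here is a hedged sketch of what a single linear pass over recorded sepsets could look like; the containment criterion and the toy graph below are assumptions for illustration, not necessarily the exact consistency notion used in the submission.

# Illustrative sketch of a linear-time sepset consistency check over the separating
# sets recorded during a PC-style skeleton search. The criterion used here (each
# sepset contained in the current adjacencies of at least one endpoint) is an
# assumption for illustration.

def find_inconsistent_sepsets(adjacencies, sepsets):
    """adjacencies: dict node -> set of current neighbours;
    sepsets: dict (x, y) -> separating set recorded when edge x-y was removed."""
    inconsistent = []
    for (x, y), s in sepsets.items():        # one pass: linear in the number of sepsets
        if not (s <= adjacencies[x] or s <= adjacencies[y]):
            inconsistent.append((x, y, s))
    return inconsistent

# Tiny example: edge A-D was removed with separating set {C}, but C is no longer
# adjacent to either A or D, so that sepset is flagged as inconsistent.
adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}, "D": set()}
seps = {("A", "C"): {"B"}, ("A", "D"): {"C"}}
print(find_inconsistent_sepsets(adj, seps))   # -> [('A', 'D', {'C'})]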


R1, R6: Additional analyses/ablations for L_sparse and L

Neural Information Processing Systems

We thank the reviewers for their thoughtful comments and suggestions. Below, we address the reviewers' comments individually and will add these analyses to the main text. Keypoints can indeed "jump" between frames, but a new analysis (Fig. D) shows that jumping is a minor issue. R1: What is the size of the feature vector in CNN-VRNN?
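
As a rough illustration of the kind of jump analysis mentioned above, the sketch below flags keypoints whose inter-frame displacement exceeds a threshold; the synthetic trajectories and the 10-pixel threshold are hypothetical choices, unrelated to the values behind Fig. D.

import numpy as np

# Illustrative sketch: quantify keypoint "jumps" as inter-frame displacements above
# a threshold. The synthetic trajectories and the threshold are hypothetical.
rng = np.random.default_rng(0)
num_frames, num_keypoints = 200, 16
traj = np.cumsum(rng.normal(scale=0.5, size=(num_frames, num_keypoints, 2)), axis=0)
traj[120, 3] += 25.0                          # inject one artificial jump for keypoint 3

displacement = np.linalg.norm(np.diff(traj, axis=0), axis=-1)  # (num_frames-1, num_keypoints)
jump_rate = (displacement > 10.0).mean()                       # fraction of flagged transitions

print(f"fraction of keypoint transitions flagged as jumps: {jump_rate:.4%}")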


Reviewer 1:

Neural Information Processing Systems

We greatly appreciate the feedback of the reviewers. We discuss their specific concerns below and will include this discussion in the paper. We will include empirical results of a Gaussian-process-based bandit in the final paper. We will look into the techniques of Qian and Yang (2016) for adaptivity to the smoothness.
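
For context on the Gaussian-process-based bandit mentioned above, a minimal GP-UCB-style loop is sketched below; the RBF kernel, exploration constant, and synthetic reward function are illustrative assumptions, not the configuration that will be reported in the final paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Minimal GP-UCB-style bandit sketch on a discretised arm space (illustrative only).
# The kernel, exploration constant, and reward function are placeholder assumptions.
rng = np.random.default_rng(0)
arms = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
beta = 2.0                                             # exploration constant (assumed)

def true_reward(x):
    return np.sin(3.0 * x).ravel()                     # hypothetical reward function

X_hist, y_hist = [], []
for t in range(30):
    if X_hist:
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
        gp.fit(np.array(X_hist), np.array(y_hist))
        mu, sigma = gp.predict(arms, return_std=True)
        idx = int(np.argmax(mu + beta * sigma))        # upper-confidence acquisition
    else:
        idx = int(rng.integers(len(arms)))             # first pull is random
    x = arms[idx]
    X_hist.append(x)
    y_hist.append(float(true_reward(x)[0] + rng.normal(scale=0.1)))  # noisy reward

print(f"arm with highest true reward: {arms[int(np.argmax(true_reward(arms)))][0]:.2f}; "
      f"last arm pulled: {X_hist[-1][0]:.2f}")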




8cbd005a556ccd4211ce43f309bc0eac-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the three reviewers for their constructive comments. The following are our responses to the reviewers' comments. Re. the point that the interpretation should be regarded as a good motivation: it "might be more intuitive than the ..." and "offers a clear way to handle the nonlinear data with geometric structures." We also add two of the latest "Deep ..." baselines; Table II lists the error rates on the classification tasks. We will add the above discussions in the final paper.