Unsupervised or Indirectly Supervised Learning
Review for NeurIPS paper: Graph Random Neural Networks for Semi-Supervised Learning on Graphs
Weaknesses: The proposed method is not especially novel. More specifically: (1) The consistency regularization appears to be a general framework that can be combined with other data augmentation methods, such as DropEdge, and with sampling algorithms. It would be better if the authors could also try these combinations instead of only adopting their proposed DropNode augmentation. In addition, it would be better if the authors provided a curve showing the performance of the proposed framework against other baselines under different training-data percentages. It would also be worth combining these methods with a more advanced base GNN.
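A minimal sketch of the DropNode-plus-consistency training this review refers to, assuming a generic `model` that maps node features to class logits; the function names, drop probability, and sharpening temperature are illustrative assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def drop_node(x: torch.Tensor, drop_prob: float = 0.5) -> torch.Tensor:
    """Zero out entire node feature rows at random, rescaling the survivors."""
    mask = (torch.rand(x.size(0), 1, device=x.device) > drop_prob).float()
    return x * mask / (1.0 - drop_prob)

def consistency_loss(probs_list, temperature: float = 0.5) -> torch.Tensor:
    """Pull each augmented prediction toward the sharpened average prediction."""
    avg = torch.stack(probs_list).mean(dim=0)
    sharpened = avg ** (1.0 / temperature)
    sharpened = (sharpened / sharpened.sum(dim=1, keepdim=True)).detach()
    return sum(F.mse_loss(p, sharpened) for p in probs_list) / len(probs_list)

def unsupervised_step(model, x, num_augmentations: int = 2) -> torch.Tensor:
    # Several stochastic DropNode views of the same graph, regularized to agree.
    probs = [F.softmax(model(drop_node(x)), dim=1) for _ in range(num_augmentations)]
    return consistency_loss(probs)
```

Swapping `drop_node` for a DropEdge-style edge mask is exactly the kind of combination the review asks the authors to try.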
Review for NeurIPS paper: Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation
Weaknesses: - The technical novelty of the proposed method is somewhat incremental, since it is largely based on the work of [14] with some modifications to the generator and discriminator architectures. The word-level training feedback in the discriminator seems to be the main technical contribution, but it is not ground-breaking, as it extends the auxiliary classifier of conditional GANs to multiple classes. Specifically, only the nouns and adjectives are chosen manually as text-relevant attributes, which convey a very limited context of general descriptions. Although this may allow fine-grained control of the image content in a limited context, it reduces the capability of aligning the rich context of the text to the image, which is often available in approaches that learn to encode the whole sentence. Although the authors give some justification in Section 3.2.1 for using a heuristic approach, it is not clear that this assumption holds in general. Current comparisons are mostly focused on ManiGAN.
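To make the reviewer's reading concrete, here is a hedged sketch of a discriminator with an auxiliary multi-label head over word attributes, generalizing the single-label auxiliary classifier of conditional GANs; the backbone, feature dimensions, and attribute vocabulary size are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class WordLevelDiscriminator(nn.Module):
    def __init__(self, feat_dim: int = 256, num_attributes: int = 1000):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in image encoder
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(feat_dim, 1)                # real/fake score
        self.attr_head = nn.Linear(feat_dim, num_attributes)  # one logit per noun/adjective

    def forward(self, image: torch.Tensor):
        h = self.backbone(image)
        return self.adv_head(h), self.attr_head(h)
```

The attribute head would be trained with a binary cross-entropy loss against a multi-hot vector marking which nouns and adjectives from the caption apply to the image, alongside the usual adversarial loss.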
Review for NeurIPS paper: Learning Semantic-aware Normalization for Generative Adversarial Networks
R3 and R4 rate the paper in the top 50% of accepted papers, while R1 rates it marginally below the acceptance bar. Although R1 initially raised several concerns about the paper's novelty, R1 upgraded the rating after the rebuttal addressed those concerns. After consolidating the reviews and rebuttal, the AC finds the proposed method interesting. The channel grouping and normalization based on filter similarity is new for generator design, and the results and analysis presented in the paper support the claim. The AC determines that the paper has sufficient merit to be published at NeurIPS and recommends acceptance.
Review for NeurIPS paper: Not All Unlabeled Data are Equal: Learning to Weight Data in Semi-supervised Learning
Strengths: 1) This paper considers an often-overlooked aspect of SSL: not all unlabeled data should be treated equally during training. Although automatic per-sample weight tuning has already been studied in the supervised learning setting, it is new in the SSL context to the best of my knowledge. Therefore, the motivation is clear and valid. The proposed algorithm is simple and practical and demonstrates benefits over different baselines, so the paper delivers on its motivation. The influence function is a tool for measuring a model's dependence on individual samples in the training set.
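As a rough illustration of how influence functions connect to data weighting (not the paper's exact algorithm), the first-order sketch below scores each pseudo-labeled example by how well its loss gradient aligns with the gradient of a clean validation loss; the inverse-Hessian term of the full influence function is dropped for brevity, and all names are illustrative:

```python
import torch

def influence_weights(model, loss_fn, unlabeled_batch, pseudo_labels,
                      val_batch, val_labels) -> torch.Tensor:
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the validation loss w.r.t. the model parameters.
    val_loss = loss_fn(model(val_batch), val_labels)
    g_val = torch.autograd.grad(val_loss, params)

    weights = []
    for x, y in zip(unlabeled_batch, pseudo_labels):
        loss_u = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        g_u = torch.autograd.grad(loss_u, params)
        # Alignment with the validation gradient approximates (negative) influence.
        score = sum((gu * gv).sum() for gu, gv in zip(g_u, g_val))
        weights.append(score.clamp(min=0.0))  # keep only helpful examples
    return torch.stack(weights)
```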
Review for NeurIPS paper: Not All Unlabeled Data are Equal: Learning to Weight Data in Semi-supervised Learning
After the rebuttal, the 2 reviewers initially inclined to reject the paper raised their grades, leading to a consensus among reviewers for weak acceptance. Although there remains room for improvement, the AC agrees that the contributions are solid and interesting for the community, and therefore recommends acceptance. The authors are highly encouraged to update the final version of the paper based on the reviewers' comments.
Filter, Obstruct and Dilute: Defending Against Backdoor Attacks on Semi-Supervised Learning
Xinrui Wang, Chuanxing Geng, Wenhai Wan, Shao-yuan Li, Songcan Chen
Recent studies have verified that semi-supervised learning (SSL) is vulnerable to data poisoning backdoor attacks. Even a tiny fraction of contaminated training data is sufficient for adversaries to manipulate up to 90% of the test outputs in existing SSL methods. Given the emerging threat of backdoor attacks designed for SSL, this work aims to protect SSL against such risks, marking it as one of the few known efforts in this area. Specifically, we begin by identifying that the spurious correlations between the backdoor triggers and the target class implanted by adversaries are the primary cause of manipulated model predictions during the test phase. To disrupt these correlations, we utilize three key techniques: Gaussian filtering, complementary learning, and trigger mix-up, which collectively filter, obstruct, and dilute the influence of backdoor attacks in both data pre-processing and feature learning. Experimental results demonstrate that our proposed method, Backdoor Invalidator (BI), significantly reduces the average attack success rate from 84.7% to 1.8% across different state-of-the-art backdoor attacks. It is also worth mentioning that BI does not sacrifice accuracy on clean data and is supported by a theoretical guarantee of its generalization capability.
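For illustration only, the "filter" and "dilute" ingredients can be sketched in a few lines; the kernel size, sigma, and Beta parameter below are illustrative assumptions, and this is not the authors' full BI pipeline:

```python
import torch
from torchvision.transforms import GaussianBlur

# "Filter": a Gaussian low-pass filter attenuates high-frequency trigger
# patterns in the inputs before training.
gaussian_filter = GaussianBlur(kernel_size=3, sigma=1.0)

def preprocess(batch: torch.Tensor) -> torch.Tensor:
    """Apply Gaussian filtering to a batch of images shaped (N, C, H, W)."""
    return gaussian_filter(batch)

# "Dilute": mixing a suspect batch with another batch weakens any trigger's
# spurious correlation with the target class (generic mix-up; the paper's
# trigger mix-up details are not given here and are assumed).
def mixup(x_a: torch.Tensor, x_b: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x_a + (1.0 - lam) * x_b
```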
Review for NeurIPS paper: Graph Stochastic Neural Networks for Semi-supervised Learning
Weaknesses: This paper combines latent variable models with GNNs, which is not sufficiently novel; there are many previous works with similar ideas in graph generation. The difference is that the formulation of this paper is more like a conditional generative model and targets node classification tasks. Based on the implementation of the method, I think the model is similar to RGCN in some aspects. Admittedly, there are differences: the model does not directly learn a Gaussian representation but instead samples from a Gaussian latent variable and concatenates the sample with the node features. However, both aim to inject noise and, in essence, decrease the mutual information between the representation and the original node features, so that the model captures only the key attributes and is thus more robust than vanilla GNNs.
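A minimal sketch of the mechanism this review describes, i.e. sampling a Gaussian latent variable per node and concatenating it with the node features; the dimensions and the stand-in linear layer (in place of a real GNN layer) are illustrative assumptions:

```python
import torch
import torch.nn as nn

class NoisyNodeEncoder(nn.Module):
    def __init__(self, feat_dim: int, latent_dim: int, hidden_dim: int):
        super().__init__()
        self.latent_dim = latent_dim
        self.lin = nn.Linear(feat_dim + latent_dim, hidden_dim)  # stand-in for a GNN layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # z ~ N(0, I), one sample per node; the concatenated noise limits how
        # much of the raw feature the learned representation can preserve.
        z = torch.randn(x.size(0), self.latent_dim, device=x.device)
        return self.lin(torch.cat([x, z], dim=1))
```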
Review for NeurIPS paper: Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference
This paper focuses on the problem of leveraging unlabelled data to generate better estimates of fairness metrics given limited labelled data. All three reviewers agree that the manuscript makes a valuable contribution and is conceptually and mathematically sound. The significance of the contribution (an auditing tool only, rather than an auditing plus mitigation tool) is, however, on the low side.