Unsupervised or Indirectly Supervised Learning


Reviews: Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification

Neural Information Processing Systems

All three reviewers, who are well-qualified experts for this paper, found it interesting, novel, compelling, and well-written. For a topic as difficult as fairness, it was particularly helpful that the authors discussed their assumptions, results, and proofs so clearly; this adds real value to the work. The authors' response was appreciated and found to be helpful, but in discussion the reviewers expressed some concern about the addition of many new results they had no chance to review. While we hope the authors can address some of the reviewers' suggestions in the final paper, they are encouraged not to add too much material that was not reviewed, and instead to consider expanding on it in a follow-on submission.


Author feedback: comments on the reviewers' remarks and questions. The combination of the guess loss with additive noise beats the out-of-the-box CycleGAN on the GTA dataset in terms

Neural Information Processing Systems

We cannot thank the reviewers enough for their valuable feedback on our work.

Reviewers 1 and 2 (on combining the guess loss with additive noise): Most recent advances in adversarial defense methods address "black-box attacks." Adversarial training incorporates adversarial examples during training to increase the model's robustness to the attack; the reconstructed image can therefore serve as an adversarially perturbed example of the non-adversarial input image.

Reviewer 3 raised the concern that the novelty is insufficient, as most of the proposed solutions or observations are already published.
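For context, here is a minimal sketch of the standard adversarial-training loop the response alludes to (incorporating adversarial examples during training). The FGSM attack used here is a common illustrative choice, not necessarily what the paper uses, and all names are our own:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Craft an FGSM adversarial example (an illustrative attack choice)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    # Step in the direction that increases the loss, then clamp to image range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One update that mixes clean and adversarial examples."""
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```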


Contrastive learning of global and local features for medical image segmentation with limited annotations

Neural Information Processing Systems

A key requirement for the success of supervised deep learning is a large labeled dataset - a condition that is difficult to meet in medical image analysis. Self-supervised learning (SSL) can help in this regard by providing a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations. Contrastive learning, a particular variant of SSL, is a powerful technique for learning image-level representations. In this work, we propose strategies for extending the contrastive learning framework for segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues. Specifically, we propose (1) novel contrasting strategies that leverage structural similarity across volumetric medical images (domain-specific cue) and (2) a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation (problem-specific cue). We carry out an extensive evaluation on three Magnetic Resonance Imaging (MRI) datasets. In the limited annotation setting, the proposed method yields substantial improvements compared to other self-supervision and semi-supervised learning techniques. When combined with a simple data augmentation technique, the proposed method reaches within 8% of benchmark performance using only two labeled MRI volumes for training, corresponding to only 4% (for ACDC) of the training data used to train the benchmark.
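To illustrate the local contrastive idea described above, here is a minimal sketch (our own, not the authors' code) of an NT-Xent-style loss applied to pooled local feature-map regions; the grid size and temperature are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(feat1, feat2, grid=3, temperature=0.1):
    """NT-Xent-style loss over local regions of two feature maps.

    feat1, feat2: (B, C, H, W) feature maps of two augmented views of the
    same batch; descriptors of corresponding grid cells are positives,
    all other cells act as negatives.
    """
    # Pool each feature map into a grid x grid set of local descriptors.
    z1 = F.adaptive_avg_pool2d(feat1, grid).flatten(2).transpose(1, 2)
    z2 = F.adaptive_avg_pool2d(feat2, grid).flatten(2).transpose(1, 2)
    z1 = F.normalize(z1.reshape(-1, z1.size(-1)), dim=1)  # (B*grid*grid, C)
    z2 = F.normalize(z2.reshape(-1, z2.size(-1)), dim=1)
    logits = z1 @ z2.t() / temperature           # pairwise region similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

In the paper's framework this local term complements a global, image-level contrastive loss used for pre-training the encoder.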


Reviews: Quality Aware Generative Adversarial Networks

Neural Information Processing Systems

I have read the response carefully. The new experiments look good, but the authors do not seem to address my concern about using the SSIM metric between unpaired images, so I keep my original review and rating. Given all the prior work that smooths GAN training, the idea of integrating image quality assessment metrics with GANs sounds interesting. Judging from the experimental samples, the quality-aware GAN does improve sample quality; the generated CelebA and STL images look sharp.


Reviews: Quality Aware Generative Adversarial Networks

Neural Information Processing Systems

The paper proposes a novel way to regularize the training of deep adversarial generative models for natural images, based on image quality metrics. While many ways of stabilizing and regularizing GAN training have been proposed in prior work, most of them based on gradient penalties related to Lipschitzness, this submission proposes an idea that is significantly different and novel. The paper evaluates the new method on three reasonably challenging datasets (CIFAR-10, STL-10, CelebA) and quantitatively shows objective advantages over other methods (in terms of FID and IS). The field of GANs, and in particular the various ways to stabilize their training, has recently been attracting a perhaps excessive amount of attention, with many papers proposing methods very similar in nature.
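To make the reviewed idea concrete, here is a minimal sketch of a generator update with an added image-quality penalty. The paper uses IQA metrics such as SSIM; total variation is substituted here only as a simple differentiable stand-in, and all names and weights are illustrative:

```python
import torch

def total_variation(x):
    """Crude differentiable stand-in for an image quality penalty."""
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def generator_step(G, D, g_opt, z, quality_weight=0.1):
    """Adversarial loss plus a quality-aware regularization term."""
    fake = G(z)
    adv_loss = -D(fake).mean()               # WGAN-style generator loss
    loss = adv_loss + quality_weight * total_variation(fake)
    g_opt.zero_grad()
    loss.backward()
    g_opt.step()
    return loss.item()
```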


Author Feedback: 92d1e1eb1cd6f9fba3227870bb6d7f07-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their fruitful comments!

Response to Reviewer 2: We predict characters for LibriSpeech/Libri-light. Thank you for the pointer! Regarding "when the official LibriSpeech LM... is incorporated into decoding, it is not clear whether the experiments still represent...": we will also try to make the paper more self-contained given the space restrictions.

"I'm not convinced that this training works well conceptually." "... for ASR, we have a lot of transcribed data, and we can make a strong ASR model and perform transfer learning."

"... how to extract K distractors." - The distractors are quantized latent speech representations sampled from masked time-steps. If another masked time-step uses the same quantized latent, then it won't be sampled.

"The paper would have been significantly different in terms of quality had you applied your approach to some standard..." - This follows other recent work on semi-supervised methods for speech, such as "Improved Noisy Student Training" (Synnaeve et al., 2020), which achieves some of the strongest results.
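A minimal sketch of the distractor-sampling rule described in the response (our illustration, not the authors' code): for each masked target time-step, sample K distractors from the other masked time-steps, skipping any that share the target's quantized latent:

```python
import random

def sample_distractors(masked_steps, quantized, target, k):
    """Sample K distractor indices for a masked target time-step.

    masked_steps: indices of masked time-steps
    quantized: mapping from time-step index to its quantized latent id
    target: the time-step whose positive we are contrasting against
    """
    # Candidates: other masked steps whose quantized latent differs from
    # the target's (steps with the identical latent are never sampled).
    candidates = [t for t in masked_steps
                  if t != target and quantized[t] != quantized[target]]
    return random.sample(candidates, min(k, len(candidates)))
```

For example, sample_distractors([0, 3, 5, 9], {0: 7, 3: 7, 5: 2, 9: 4}, target=0, k=2) draws only from steps 5 and 9, because step 3 shares the target's quantized latent.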


Review for NeurIPS paper: Few-Cost Salient Object Detection with Adversarial-Paced Learning

Neural Information Processing Systems

This paper received reviews from 3 expert reviewers. The reviewers appreciated the interesting task (few-cost saliency detection) and the use of self-paced learning combined with generative adversarial learning. After considering the authors' response, the reviewers refined their positions on the paper. R2's comments regarding semi-supervised learning remain valid; the authors are encouraged to refine the presentation of this point and their use of terms.
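As background on the self-paced learning component mentioned above, here is a minimal sketch of the classic hard-weighting rule (Kumar et al., 2010); it is included only for illustration and is not the paper's specific adversarial-paced formulation:

```python
import torch

def self_paced_weights(losses, lam):
    """Classic self-paced hard weights: admit only 'easy' samples.

    losses: per-sample losses, shape (N,)
    lam: age parameter; as it grows, harder samples are admitted.
    """
    return (losses < lam).float()

def self_paced_objective(losses, lam):
    """Weighted loss plus the standard self-paced regularizer -lam * sum(v)."""
    v = self_paced_weights(losses, lam)
    return (v * losses).sum() - lam * v.sum()
```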


Review for NeurIPS paper: Training Generative Adversarial Networks with Limited Data

Neural Information Processing Systems

Summary and Contributions: This work proposes to address the problem of limited data in GAN training with discriminator augmentation (DA), a technique which enables most standard data augmentation techniques to be applied to GANs without leaking them into the learned distribution. The method is simple, yet effective: non-leaking differentiable transformations are applied to real and fake images before being passed through the discriminator, both during discriminator and generator updates. To make transformations non-leaking, it is proposed to apply them with some probability p < 1, such that the discriminator will eventually be able to discern the true underlying distribution. One challenge introduced with this technique is that different datasets require different amounts of augmentation depending on their size, and as such, an expensive grid search is required to tune p. To eliminate the need for this search step, an adaptive version, called adaptive discriminator augmentation (ADA), is introduced.
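A minimal sketch of the discriminator-augmentation idea summarized above (our paraphrase in code; the augmentation pipeline, probability handling, and losses are simplified relative to the paper):

```python
import torch
import torch.nn.functional as F

def maybe_augment(images, p, augment):
    """Apply a differentiable augmentation to each image with probability p < 1."""
    mask = (torch.rand(images.size(0), 1, 1, 1, device=images.device) < p).float()
    return mask * augment(images) + (1 - mask) * images

def discriminator_step(G, D, d_opt, real, z, p, augment):
    """Both real and fake images pass through the same stochastic
    augmentation before being scored by the discriminator."""
    fake = G(z).detach()
    loss = F.softplus(-D(maybe_augment(real, p, augment))).mean() + \
           F.softplus(D(maybe_augment(fake, p, augment))).mean()
    d_opt.zero_grad()
    loss.backward()
    d_opt.step()
    return loss.item()
```

The generator update likewise scores augmented fakes, and the adaptive variant (ADA) adjusts p online from a discriminator-overfitting heuristic rather than by grid search.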


Review for NeurIPS paper: Training Generative Adversarial Networks with Limited Data

Neural Information Processing Systems

All reviewers found this work interesting and agreed that it addresses an important issue in GAN training. The authors did a great job presenting their analyses and experiments. Please take the reviewers' comments into account in your next revision (particularly some of the presentation advice). The authors are encouraged to cite the following work on a similar "non-leaking" DA: https://arxiv.org/abs/2006.05338. (We did not bring this up during discussion, nor did we use it for or against the authors.)


Supplementary Materials - VIME: Extending the Success of Self-and Semi-supervised Learning to Tabular Domain

Neural Information Processing Systems

Self-supervised learning trains an encoder to extract informative representations from the unlabeled data. Semi-supervised learning uses the trained encoder in learning a predictive model on both labeled and unlabeled data.

Figure 3: The proposed data corruption procedure.

In the experiments section of the main manuscript, we evaluate VIME and its benchmarks on 11 datasets (6 genomics, 2 clinical, and 3 public datasets). Here, we provide basic statistics for the 11 datasets in Table 1.
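A minimal sketch of the mask-and-corrupt procedure that Figure 3 illustrates: each feature is replaced, with probability p_m, by a value drawn from that feature's empirical marginal distribution (implemented here by shuffling each column independently). The function name and corruption rate are illustrative, not the authors' code:

```python
import numpy as np

def corrupt(x, p_m=0.3, rng=None):
    """VIME-style corruption of a tabular batch x of shape (N, D).

    Returns the binary mask and the corrupted batch; the pretext task is
    to recover the mask and the original features from the corrupted input.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = x.shape
    mask = rng.binomial(1, p_m, size=(n, d))
    # Draw replacement values from each feature's empirical marginal
    # distribution by shuffling every column independently.
    x_bar = np.stack([rng.permutation(x[:, j]) for j in range(d)], axis=1)
    x_tilde = mask * x_bar + (1 - mask) * x
    return mask, x_tilde
```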