Unsupervised or Indirectly Supervised Learning


Reviews: Quality Aware Generative Adversarial Networks

Neural Information Processing Systems

I have read it carefully. The new experiments look good, but the authors do not seem to respond to my concern about using the SSIM metric between unpaired images. I keep my original review and rating. Given all the prior work on smoothing GAN training, the idea of integrating image quality assessment metrics with GANs sounds interesting. Judging from the experimental samples, the quality-aware GAN does improve sample quality; the generated CelebA and STL images look sharp.


Reviews: Quality Aware Generative Adversarial Networks

Neural Information Processing Systems

The paper proposes a novel way to regularize the training of deep adversarial generative models for natural images. The proposal is based on image quality metrics. While many different ways of stabilizing and regularizing GAN training were proposed in prior work, most of them based on various gradient penalties related to Lipschitzness, this submission proposes an idea that is significantly different and novel. The paper evaluates the new method on three reasonably challenging datasets (CIFAR-10, STL-10, CelebA) and quantitatively shows objective advantages over other methods (in terms of FID and IS). The field of GANs, and in particular the various ways to stabilize their training, has recently been attracting a perhaps excessive amount of attention, with many papers proposing methods that are very similar in nature.


92d1e1eb1cd6f9fba3227870bb6d7f07-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their fruitful comments! Response to Reviewer 2: We predict characters for LibriSpeech/Libri-light. Thank you for the pointer! "When the official LibriSpeech LM ... is incorporated into decoding, it is not clear whether the experiments still represent ..." - We will also try to make it more self-contained given the space restrictions. "I'm not convinced that this training works well conceptually." "... for ASR, we have a lot of transcribed data, and we can make a strong ASR model and perform transfer learning." "... how to extract K distractors." - The distractors are quantized latent speech representations sampled from masked time steps. If another masked time step uses the same quantized latent, then it won't be sampled. "The paper would have been significantly different in terms of quality had you applied your approach to some standard ..." - This follows other recent work on semi-supervised methods for speech, such as "Improved Noisy Student Training" (Synnaeve et al., 2020), which achieve some of the strongest results.
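For context, a minimal sketch of the distractor-sampling rule described in the response, assuming a contrastive setup with per-time-step quantized latents; the function name, signature, and uniform sampling choice are illustrative assumptions, not the authors' code:

```python
# Hypothetical sketch: for each masked target step, K distractors are drawn from the
# quantized latents of the *other* masked time steps, skipping any step whose quantized
# latent is identical to the target's (so the true positive cannot be sampled).
import numpy as np

def sample_distractors(quantized, masked_idx, target, K, rng=None):
    """quantized: (T, D) quantized latent speech representations.
    masked_idx: indices of masked time steps. target: the masked step being predicted."""
    rng = rng or np.random.default_rng()
    candidates = [i for i in masked_idx
                  if i != target and not np.array_equal(quantized[i], quantized[target])]
    chosen = rng.choice(candidates, size=K, replace=len(candidates) < K)
    return quantized[chosen]  # (K, D) negatives for the contrastive objective
```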


Review for NeurIPS paper: Few-Cost Salient Object Detection with Adversarial-Paced Learning

Neural Information Processing Systems

This paper received reviews from three expert reviewers. The reviewers appreciated the interesting task (few-cost salient object detection) and the use of self-paced learning combined with generative adversarial learning. After considering the authors' response, the reviewers refined their positions on the paper. R2's comments regarding semi-supervised learning remain valid; the authors are encouraged to refine the presentation and use of terms accordingly.


Review for NeurIPS paper: Training Generative Adversarial Networks with Limited Data

Neural Information Processing Systems

Summary and Contributions: This work proposes to address the problem of limited data in GAN training with discriminator augmentation (DA), a technique which enables most standard data augmentation techniques to be applied to GANs without leaking them into the learned distribution. The method is simple yet effective: non-leaking differentiable transformations are applied to real and fake images before they are passed to the discriminator, during both discriminator and generator updates. To make the transformations non-leaking, it is proposed to apply them with some probability p < 1, such that the discriminator is eventually able to discern the true underlying distribution. One challenge introduced with this technique is that different datasets require different amounts of augmentation depending on their size, and as such an expensive grid search over p is required. To eliminate the need for this search step, an adaptive version called adaptive discriminator augmentation (ADA) is introduced.
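A minimal sketch of the stochastic augmentation scheme summarized above, assuming a generic differentiable `augment` transform and a non-saturating logistic GAN loss; the function names and loss form are illustrative choices, not the paper's implementation:

```python
# Sketch: apply a differentiable augmentation with probability p < 1 to both real and
# fake images, during both the discriminator and the generator update.
import torch
import torch.nn.functional as F

def maybe_augment(x, p, augment):
    """Augment each image in the batch with probability p, leave it unchanged otherwise."""
    mask = (torch.rand(x.size(0), device=x.device) < p).float().view(-1, 1, 1, 1)
    return mask * augment(x) + (1 - mask) * x

def d_loss(D, G, real, z, p, augment):
    fake = G(z).detach()  # discriminator update: do not backprop into G
    return (F.softplus(-D(maybe_augment(real, p, augment))).mean()
            + F.softplus(D(maybe_augment(fake, p, augment))).mean())

def g_loss(D, G, z, p, augment):
    # The generator also sees the discriminator only through augmented samples.
    return F.softplus(-D(maybe_augment(G(z), p, augment))).mean()
```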


Review for NeurIPS paper: Training Generative Adversarial Networks with Limited Data

Neural Information Processing Systems

All reviewers found this work interesting and found that it addresses an important issue in GAN training. The authors did a great job presenting their analyses and experiments. Please take the reviewers' comments into account in your next revision (particularly some of the presentation advice). The authors are encouraged to cite the following work, which proposes a similar "non-leaking" DA: https://arxiv.org/abs/2006.05338 (We did not bring this up during the discussion, nor was it used for or against the authors.)


Supplementary Materials - VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain

Neural Information Processing Systems

Self-supervised learning trains an encoder to extract informative representations from the unlabeled data. Semi-supervised learning then uses the trained encoder to learn a predictive model on both the labeled and unlabeled data. (Figure 3 illustrates the proposed data corruption procedure.) In the experiment section of the main manuscript, we evaluate VIME and its benchmarks on 11 datasets (6 genomics, 2 clinical, and 3 public datasets). Here, we provide the basic statistics for the 11 datasets in Table 1.
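A minimal sketch of a mask-and-corrupt procedure of the kind referenced above, assuming masked entries are replaced with values drawn from each feature's empirical marginal distribution (consistent with the pretext task described elsewhere in this collection); the function and parameter names are illustrative, not VIME's released code:

```python
# Sketch: entries selected by a Bernoulli mask are replaced with values sampled
# feature-wise from the empirical marginal (i.e., shuffled within each column).
import numpy as np

def corrupt(X, p_mask, rng=None):
    """X: (n, d) unlabeled table. Returns the corrupted table and the binary mask."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    mask = rng.binomial(1, p_mask, size=(n, d)).astype(float)
    # Replacement values drawn from each feature's empirical marginal distribution.
    X_bar = np.stack([X[rng.permutation(n), j] for j in range(d)], axis=1)
    X_tilde = mask * X_bar + (1 - mask) * X
    return X_tilde, mask
```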


Review for NeurIPS paper: VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain

Neural Information Processing Systems

Weaknesses: My central concern for this paper is the misalignment between the motivation and the methodology. As motivation, the authors argue that self-supervised CV and NLP "algorithms are not effective for tabular data." The proposed model, though, is effectively the binary masked language model whose variants pervade self-supervised NLP research (e.g., ...). Granted, instead of masking words, the proposed models mask tabular values, but this is a very similar pretext task. In fact, there is concurrent work that learns tabular representations using a BERT model [1].


VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain

Neural Information Processing Systems

Self- and semi-supervised learning frameworks have made significant progress in training machine learning models with limited labeled data in the image and language domains. These methods rely heavily on the unique structure in the domain datasets (such as spatial relationships in images or semantic relationships in language). They are not readily adaptable to general tabular data, which does not have the same explicit structure as image and language data. In this paper, we fill this gap by proposing novel self- and semi-supervised learning frameworks for tabular data, which we refer to collectively as VIME (Value Imputation and Mask Estimation). We create a novel pretext task of estimating mask vectors from corrupted tabular data, in addition to the reconstruction pretext task for self-supervised learning. We also introduce a novel tabular data augmentation method for self- and semi-supervised learning frameworks. In experiments, we evaluate the proposed framework on multiple tabular datasets from various application domains, such as genomics and clinical data. VIME exceeds state-of-the-art performance in comparison to existing baseline methods.
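A minimal sketch of the two pretext tasks named in the abstract (mask-vector estimation plus feature reconstruction from corrupted inputs); the encoder size, loss weighting `alpha`, and the assumption of a separate corruption helper are illustrative choices, not VIME's released code:

```python
# Sketch: from corrupted inputs x_tilde, an encoder is trained so that one head
# estimates the binary mask (which entries were corrupted) and another reconstructs
# the original feature values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PretextModel(nn.Module):
    def __init__(self, d, h=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, h), nn.ReLU())
        self.mask_head = nn.Linear(h, d)   # mask estimation head
        self.recon_head = nn.Linear(h, d)  # value reconstruction head

    def forward(self, x_tilde):
        z = self.encoder(x_tilde)
        return self.mask_head(z), self.recon_head(z)

def pretext_loss(model, x, x_tilde, mask, alpha=1.0):
    """x: original features, x_tilde: corrupted features, mask: 0/1 float tensor."""
    mask_logits, x_hat = model(x_tilde)
    loss_mask = F.binary_cross_entropy_with_logits(mask_logits, mask)
    loss_recon = F.mse_loss(x_hat, x)
    return loss_mask + alpha * loss_recon
```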


Review for NeurIPS paper: VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain

Neural Information Processing Systems

This paper proposes a new reconstruction loss for unsupervised training of representations. This loss extends auto-encoders via a pretext task that uses the marginal distribution of features. The reviewers were unanimous in their decision to accept this paper.