


6804c9bca0a615bdb9374d00a9fcba59-AuthorFeedback.pdf

Neural Information Processing Systems

We believe that these important contributions warrant publication in the conference. State-of-the-art claims for text-to-speech: we will remove the state-of-the-art TTS claim made in line 87 in the final version. The MOS of ground truth audio in this dataset is 4.72. R1: claiming "autoregressive models can be readily replaced with MelGAN decoder" (line 89, line 228) without. For the sake of brevity, the results are as follows: Original (4.19 ± 0.083), MelGAN (3.49. Yes, the exact same hardware and computing specifications were used to compare all the models.
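The rebuttal reports scores in the "4.19 ± 0.083" style, i.e. a mean rating with a confidence half-width. As an illustration only (the exact interval construction is not stated in the feedback; the normal approximation with z = 1.96 below is an assumption), such a figure can be computed from raw listener ratings like this:

```python
import math
import statistics

def mos_with_ci(ratings, z=1.96):
    """Mean Opinion Score with a normal-approximation confidence interval.

    `ratings` is a list of listener scores on the usual 1-5 scale; the
    returned half-width mirrors the "4.19 +/- 0.083" style in the rebuttal.
    """
    mean = statistics.fmean(ratings)
    # Standard error of the mean, scaled by the chosen z value.
    sem = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mean, z * sem

# Hypothetical ratings, for illustration only.
scores = [5, 4, 4, 5, 3, 4, 5, 4]
mean, half_width = mos_with_ci(scores)
print(f"MOS {mean:.2f} +/- {half_width:.3f}")
```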



MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis

Neural Information Processing Systems

Previous works (Donahue et al., 2018a; Engel et al., 2019a) have found that generating coherent raw audio waveforms with GANs is challenging. In this paper, we show that it is possible to train GANs reliably to generate high-quality coherent waveforms by introducing a set of architectural changes and simple training techniques. Subjective evaluation (Mean Opinion Score, or MOS) shows the effectiveness of the proposed approach for high-quality mel-spectrogram inversion. To establish the generality of the proposed techniques, we show qualitative results of our model in speech synthesis, music domain translation and unconditional music synthesis. We evaluate the various components of the model through ablation studies and suggest a set of guidelines to design general-purpose discriminators and generators for conditional sequence synthesis tasks. Our model is non-autoregressive, fully convolutional, has significantly fewer parameters than competing models, and generalizes to unseen speakers for mel-spectrogram inversion. Our PyTorch implementation runs more than 100x faster than real time on a GTX 1080Ti GPU and more than 2x faster than real time on CPU, without any hardware-specific optimization tricks.
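The "100x faster than real time" claim is a real-time factor (RTF): seconds of audio produced per second of wall-clock inference. A minimal sketch of how such a measurement is taken (the `dummy_vocoder` stand-in below is hypothetical, not the paper's model):

```python
import time

def real_time_factor(synthesize, seconds_of_audio):
    """Seconds of audio generated per wall-clock second.

    `synthesize` is any callable producing `seconds_of_audio` of waveform;
    values above 1.0 mean faster than real time.
    """
    start = time.perf_counter()
    synthesize()
    elapsed = time.perf_counter() - start
    return seconds_of_audio / elapsed

# Stand-in for a vocoder forward pass (the real model is not reproduced here).
def dummy_vocoder():
    time.sleep(0.01)  # pretend inference takes 10 ms

rtf = real_time_factor(dummy_vocoder, seconds_of_audio=1.0)
print(f"{rtf:.0f}x real time")
```

In practice one would average over many utterances and warm up the GPU before timing; this sketch shows only the bare ratio.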



We thank all the reviewers for their valuable comments

Neural Information Processing Systems

We thank all the reviewers for their valuable comments. We would like to clarify the sentence 'When the model was trained without the mel-spectrogram loss, the training process'. We also think that applying the L1/L2 loss poses no disadvantage in a one-to-one mapping such as ours. We will clarify the details of the experiments in Section 3. Table 1 (Mean Opinion Scores) shows the MOS evaluation results; all models were trained for up to 500k steps.
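The L1 mel-spectrogram loss discussed above is simply the mean absolute difference between predicted and target spectrograms. A minimal pure-Python sketch (the nested-list representation is an assumption for illustration; real implementations operate on framework tensors):

```python
def l1_spectrogram_loss(predicted, target):
    """Mean absolute error between two (frames x mel-bins) spectrograms,
    given as nested lists; a stand-in for the mel-spectrogram loss term."""
    total, count = 0.0, 0
    for p_frame, t_frame in zip(predicted, target):
        for p, t in zip(p_frame, t_frame):
            total += abs(p - t)
            count += 1
    return total / count

# Tiny hypothetical 2-frame, 2-bin example.
pred = [[0.2, 0.5], [0.1, 0.9]]
tgt  = [[0.0, 0.5], [0.3, 1.0]]
loss = l1_spectrogram_loss(pred, tgt)
print(loss)
```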


Reviews: MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis

Neural Information Processing Systems

Quality: This paper suffers from a few critical issues. Clarity: The experimental setups could be described in more detail; Sections 3.2 and 3.4 are missing important information such as the datasets used for conducting the experiments. Significance: Although the quality of the proposed model remains unclear because of the previously mentioned critical issues, it is significant work because it is the first GAN-based model for spectrogram-to-waveform conversion that seems to work to some degree. It is significantly over-claimed: 1) claiming state-of-the-art for spectrogram-to-waveform conversion (line 6) with a MOS of 3.09 is surprising; many previous works are at a much higher level (e.g.


Reviews: MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis

Neural Information Processing Systems

The paper describes a successful approach for non-autoregressive spectrogram inversion based on Generative Adversarial Networks. The reviewers noted that even though the results are not at the level of state-of-the-art, the paper addresses a difficult and timely problem, with a convincing experimental validation and ablation study. The rebuttal addressed the main concerns of the reviewers; the authors should nonetheless make sure to address other concerns in the camera-ready version.




Towards generalizing deep-audio fake detection networks

Gasenzer, Konstantin, Wolter, Moritz

arXiv.org Artificial Intelligence

Today's generative neural networks allow the creation of high-quality synthetic speech at scale. While we welcome the creative use of this new technology, we must also recognize the risks. As synthetic speech is abused for monetary and identity theft, we require a broad set of deepfake identification tools. Furthermore, previous work reported a limited ability of deep classifiers to generalize to unseen audio generators. We study the frequency domain fingerprints of current audio generators. Building on top of the discovered frequency footprints, we train excellent lightweight detectors that generalize. We report improved results on the WaveFake dataset and an extended version. To account for the rapid progress in the field, we extend the WaveFake dataset by additionally considering samples drawn from the novel Avocodo and BigVGAN networks.
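The frequency-domain fingerprints mentioned in the abstract are derived from the magnitude spectra of generated audio, averaged over many clips. A naive sketch of the per-clip ingredient (a direct DFT; real pipelines use an FFT over windowed frames, and this toy signal is purely illustrative):

```python
import cmath

def magnitude_spectrum(signal):
    """Naive DFT magnitudes of a signal; averaging such per-bin
    energies over many clips yields a generator's frequency footprint."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(signal)))
            for k in range(n)]

# A constant signal concentrates all its energy in the DC bin (k = 0).
spec = magnitude_spectrum([1.0, 1.0, 1.0, 1.0])
print(spec)
```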