


Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme

Neural Information Processing Systems

As shown in Figures 1(b) and 1(c), when compressing the generator, the loss of the discriminator gradually tends to zero; such a situation indicates that the capacity of the discriminator significantly surpasses that of the lightweight generator.
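The imbalance described above can be detected mechanically during training by watching the discriminator's recent loss. Below is a minimal, hedged sketch: the function name, the window size, and the near-zero threshold are all illustrative choices of ours, not part of the paper.

```python
def discriminator_outpaces_generator(d_losses, window=100, threshold=1e-3):
    """Heuristic check for generator-discriminator capacity imbalance.

    If the discriminator's loss has stayed near zero over the most
    recent `window` steps, the discriminator is likely far stronger
    than the (compressed) generator. All parameter choices here are
    illustrative assumptions, not values from the paper.
    """
    recent = d_losses[-window:]
    return sum(recent) / len(recent) < threshold
```

In practice such a flag could trigger a rebalancing step, e.g. also compressing the discriminator, which is the direction the paper's cooperative scheme takes.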


Meta Internal Learning: Supplementary material Raphael Bensadoun

Neural Information Processing Systems

Next, we would like to prove the opposite direction. All LeakyReLU activations have a slope of 0.02 for negative values, except when we use a classic discriminator for single-image training, for which we use a slope of 0.2. Additionally, the generator's last conv-block activation at each scale is Tanh instead of ReLU, and the discriminator's last … We clip the gradient such that it has a maximal L2 norm of 1 for both the generators and … Batch sizes of 16 were used for all experiments involving a dataset of images. At test time, GPU memory usage is significantly reduced and requires 5GB. In this section, we consider training our method with a "frozen" pretrained ResNet34, i.e., optimizing … If the problem could be learned with a "small enough" depth, our method would benefit from even … As can be seen, our method yields realistic results with any batch size.
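Two of the concrete hyperparameters mentioned above, the 0.02 LeakyReLU slope and gradient clipping to a maximal L2 norm of 1, can be sketched in plain Python. The function names are ours; in a real framework one would use the library's built-in equivalents (e.g. PyTorch's `LeakyReLU` and `clip_grad_norm_`).

```python
import math

def leaky_relu(x, slope=0.02):
    """LeakyReLU with the paper's default 0.02 slope for negatives."""
    return x if x >= 0 else slope * x

def clip_by_global_norm(grads, max_norm=1.0):
    """Scale a list of gradient vectors so their combined L2 norm
    does not exceed max_norm (1.0, as in the supplementary)."""
    total = math.sqrt(sum(g * g for vec in grads for g in vec))
    if total <= max_norm:
        return grads
    scale = max_norm / total
    return [[g * scale for g in vec] for vec in grads]
```

Note the clipping is by the *global* norm across all parameter groups, which is the usual reading of "maximal L2 norm of 1 for both the generators and …".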




SCOP: Scientific Control for Reliable Neural Network Pruning (Supplementary Material) Yehui Tang, Yunhe Wang

Neural Information Processing Systems

Through a standard Schur complement calculation, the semi-definite condition can be derived, i.e., … The knockoff data are generated by the generator and then sent to the discriminator to verify whether the knockoff condition (Definition 1) holds. The distribution of features w.r.t. samples is shown in Figure S1, where 10K samples are drawn from the ImageNet dataset.
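The knockoff condition requires the generated knockoff features to be statistically indistinguishable from the real ones. The full condition concerns the joint distribution, but a crude moment-matching proxy conveys the idea; everything below (function name, tolerance, the moments chosen) is an illustrative assumption, not the paper's actual verification procedure.

```python
import statistics

def moments_match(real, knockoff, tol=0.1):
    """Crude proxy for the knockoff condition: compare the first two
    moments of real vs. knockoff features. A discriminator that cannot
    tell the two apart implies much more than this, but matching
    moments is a cheap necessary check. Tolerance is an assumption."""
    mean_gap = abs(statistics.fmean(real) - statistics.fmean(knockoff))
    std_gap = abs(statistics.pstdev(real) - statistics.pstdev(knockoff))
    return mean_gap <= tol and std_gap <= tol
```

In the paper's scheme the discriminator plays the role of this check, learning to detect any distributional mismatch rather than only the first two moments.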


Two-flow Feedback Multi-scale Progressive Generative Adversarial Network

Weikai Sun, Shijie Song, Wenjie Chi

arXiv.org Artificial Intelligence

Although diffusion models have made good progress in the field of image generation, GANs \cite{huang2023adaptive} still have large development space due to their unique advantages; examples include WGAN \cite{liu2021comparing}, SSGAN \cite{guibas2021adaptive}, and others \cite{zhang2022vsa, zhou2024adapt}. In this paper, we propose a novel two-flow feedback multi-scale progressive generative adversarial network (MSPG-SEN). This paper makes four contributions. 1) We propose a two-flow feedback multi-scale progressive generative adversarial network (MSPG-SEN), which not only improves image quality and human visual perception while retaining the advantages of existing GAN models, but also simplifies the training process and reduces the training cost of GAN networks. Our experimental results show that MSPG-SEN achieves state-of-the-art generation results on five datasets: INKK (89.7\%), AWUN (78.3\%), IONJ (85.5\%), POKL (88.7\%), and OPIN (96.4\%). 2) We propose an adaptive perception-behavioral feedback loop (APFL), which effectively improves the robustness and training stability of the model and reduces the training cost. 3) We propose a globally connected two-flow dynamic residual network. Ablation experiments show that it effectively improves training efficiency and greatly improves generalization ability, with stronger flexibility. 4) We propose a new dynamic embedded attention mechanism (DEMA). Experiments show that this attention can be extended to a variety of image-processing tasks, effectively captures global-local information, improves feature-separation and feature-expression capabilities, and requires minimal computing resources (only 88.7\% with INJK), with strong cross-task capability.
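The abstract does not spell out MSPG-SEN's internals, but the general shape of a multi-scale progressive generator is well known: generate coarse, upsample, refine, repeat. The sketch below is a generic coarse-to-fine loop under that assumption; `upsample2x`, `progressive_generate`, and the callable `refine` are our hypothetical names, not components of the paper.

```python
def upsample2x(img):
    """Nearest-neighbour 2x upsampling of a 2-D list of floats."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def progressive_generate(base, refine, n_scales=3):
    """Generic coarse-to-fine loop: upsample the current image, then
    let a scale-specific refinement step (an arbitrary callable here,
    a learned sub-generator in practice) adjust it."""
    img = base
    for s in range(n_scales):
        img = upsample2x(img)
        img = refine(img, s)
    return img
```

The paper's feedback loop (APFL) would presumably route a perceptual signal from later scales back into earlier refinement steps, which this sketch omits.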


unclear points and will update the paper accordingly in the final version. To Reviewer #1. 1. Architectures for generators and discriminators. We adopt the generator and discriminator …

Neural Information Processing Systems

We sincerely thank all the reviewers for their insightful comments, which help us improve the paper. To Reviewer #2. 1. Are multiple sources more beneficial? This is largely because a domain gap also exists among different source domains. We will reorganize the layout of Figure 1 in the main paper to make it clearer. We thank the reviewer for pointing this out.


