Instance-Conditioned GAN

Neural Information Processing Systems

Finally, we extend IC-GAN to the class-conditional case and show semantically controllable generation and competitive quantitative results on ImageNet, while improving over BigGAN on ImageNet-LT. Code and trained models to reproduce the reported results are publicly available.
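To make the class-conditional extension concrete, here is a minimal sketch of a generator conditioned jointly on noise, an instance embedding, and a class label. It is not the paper's BigGAN-based architecture; the module, layer sizes, and feature dimension are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a generator steered by both an instance embedding h
# (e.g. from a pretrained feature extractor) and a class label y, in the
# spirit of class-conditional IC-GAN. Sizes are illustrative.
class ClassConditionalICGenerator(nn.Module):
    def __init__(self, z_dim=128, h_dim=2048, n_classes=1000, img_dim=3 * 64 * 64):
        super().__init__()
        self.class_emb = nn.Embedding(n_classes, 128)
        self.net = nn.Sequential(
            nn.Linear(z_dim + h_dim + 128, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, img_dim),
            nn.Tanh(),
        )

    def forward(self, z, h, y):
        # Concatenate noise, instance features, and class embedding so the
        # sample is shaped by both a neighborhood (h) and a class (y).
        cond = torch.cat([z, h, self.class_emb(y)], dim=1)
        return self.net(cond)

g = ClassConditionalICGenerator()
z = torch.randn(4, 128)            # noise
h = torch.randn(4, 2048)           # features of a conditioning instance
y = torch.randint(0, 1000, (4,))   # class labels
fake = g(z, h, y)                  # (4, 3*64*64)
```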


A Detailed Description of Evaluation Metrics

Neural Information Processing Systems

We use a variety of evaluation metrics to diagnose the effect that training with instance selection has on the learned distribution. In all cases where a reference distribution is required, we use the original training distribution rather than the distribution produced after instance selection. The Inception Score is maximized when a model produces highly recognizable outputs for each of the ImageNet classes. In Table 6 we include numerical results for the retention-ratio experiments conducted in Section 4.4. The base models (threshold = 1) are marked accordingly.
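For reference, here is a minimal sketch of the Inception Score computed from class-probability predictions. It assumes `probs` holds Inception-v3 softmax outputs for the generated images; the toy Dirichlet input below is only for demonstration.

```python
import numpy as np

# IS = exp(E_x[KL(p(y|x) || p(y))]): high when each image is confidently
# classified (peaked p(y|x)) and the covered classes are diverse (broad
# marginal p(y)). `probs` is an (N, 1000) array of softmax outputs.
def inception_score(probs, eps=1e-12):
    p_y = probs.mean(axis=0, keepdims=True)          # marginal p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))      # exp of mean per-image KL

# Toy usage with random predictions; a real score uses Inception-v3 outputs.
probs = np.random.dirichlet(np.ones(1000), size=5000)
print(inception_score(probs))
```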


Overview of the Appendix

Neural Information Processing Systems

The Appendix is organized as follows. Appendix A introduces the general experimental setup. Appendix B introduces the details of dynamic sparse training. Appendix C presents the detailed algorithms, i.e., DDA and ADAPT. Appendix D shows the BR evolution during training for ADAPT. Appendix E shows additional results, including the IS and FID on the test sets from the main paper. Appendix F shows detailed FLOPs comparisons of sparse training methods.





47d40767c7e9df50249ebfd9c7cfff77-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their valuable comments! On the concern that it is unclear whether the proposed method is better than only using LSH: thank you for the suggestion. ALSH significantly outperforms both E2LSH and the Reformer LSH scheme on SMYRF-BERT base (see also Table 2).
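For context on the E2LSH baseline named above, here is a minimal sketch of a Euclidean LSH hash family, h(x) = floor((a·x + b) / r) with a ~ N(0, I) and b ~ U[0, r). It is a generic illustration, not the paper's or the baselines' actual code, and all parameters are assumptions.

```python
import numpy as np

# E2LSH-style hashing: nearby vectors fall into the same integer bucket
# with higher probability. Parameters here are illustrative only.
class E2LSHHash:
    def __init__(self, dim, n_hashes=8, r=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((n_hashes, dim))  # random projections
        self.b = rng.uniform(0.0, r, size=n_hashes)    # random offsets
        self.r = r                                     # bucket width

    def __call__(self, x):
        # One integer bucket per hash function; equal tuples collide.
        return tuple(np.floor((self.a @ x + self.b) / self.r).astype(int))

h = E2LSHHash(dim=64)
x = np.random.default_rng(1).standard_normal(64)
print(h(x))          # bucket ids for x
print(h(x + 1e-3))   # a tiny perturbation usually hits the same buckets
```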