Supplementary: CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks
Our approach leverages feature-space style modulation to adapt to novel tasks. We train our model on a variety of datasets to demonstrate the effectiveness of our approach in generating images from high-dimensional and diverse domains in a streamed manner. Due to limited space, we could only show a subset of the generated images in the main paper (Sec. 2.3). We inherit the GAN architecture from "Which Training Methods for GANs do actually Converge?" (GP-GAN). We select the GP-GAN architecture because it has been very successful at generating high-quality samples in high-dimensional spaces while providing stable training.
We present a continual learning approach for generative adversarial networks (GANs) by designing and leveraging parameter-efficient feature-map transformations. Our approach is based on learning a set of global and task-specific parameters. The global parameters are fixed across tasks, whereas the task-specific parameters act as local adapters for each task and help in efficiently obtaining task-specific feature maps. Moreover, we propose an element-wise addition of a residual bias in the transformed feature space, which further helps stabilize GAN training in such settings. Our approach also leverages task similarities based on the Fisher information matrix.
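The adapter idea above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's actual parameterization: the class names (`TaskAdapter`, `AdaptedBlock`), the choice of a 1x1 convolution as the task-specific transformation, and the layer sizes are all assumptions made for the example. It shows the two ingredients described in the abstract: global weights frozen across tasks, and a per-task adapter that transforms the global feature maps and adds an element-wise residual bias.

```python
import torch
import torch.nn as nn


class TaskAdapter(nn.Module):
    """Hypothetical task-specific adapter: a cheap 1x1 convolution that
    transforms the shared (global) feature maps, plus a learnable
    element-wise residual bias added in the transformed feature space."""

    def __init__(self, channels: int):
        super().__init__()
        self.transform = nn.Conv2d(channels, channels, kernel_size=1)
        # One bias value per channel, broadcast element-wise over H x W.
        self.residual_bias = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.transform(h) + self.residual_bias


class AdaptedBlock(nn.Module):
    """A global (task-shared) convolution, frozen after base training,
    followed by one lightweight adapter per task."""

    def __init__(self, in_ch: int, out_ch: int, num_tasks: int):
        super().__init__()
        self.global_conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.global_conv.requires_grad_(False)  # global parameters stay fixed
        self.adapters = nn.ModuleList(TaskAdapter(out_ch) for _ in range(num_tasks))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        h = self.global_conv(x)              # shared feature maps
        return torch.relu(self.adapters[task_id](h))  # task-specific maps
```

Only the adapter parameters are trained on a new task, so the per-task cost grows with the number of adapter parameters rather than the full network size.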