BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation (Supplementary Materials)
For the generator and the three discriminators, we use the FFHQ [2] and AAHQ datasets at 1024×1024 resolution. Hence, by cooperating with GAN inversion methods, our framework can achieve arbitrary style transfer of a given face image. When i = 0, all the layers of the generator are influenced by the style latent code. Result images of the direct concatenation method have face identities and head poses similar to their reference images, which means that this method leaks content information from the reference images into the style latent codes. However, for a reference image whose style differs significantly from those in AAHQ, directly feeding it into BlendGAN may produce images whose style is not similar to the reference.
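To make the role of the switch index concrete, the following is a minimal sketch of per-layer latent blending. It uses a hard switch at index i to illustrate the i = 0 case described above (every layer influenced by the style code); the paper's actual weighted blending module learns soft per-layer weights, and the layer count, latent dimension, and function names here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

NUM_LAYERS = 18   # a 1024x1024 StyleGAN2-style generator has 18 style layers
LATENT_DIM = 512

def blend_latents(w_face, w_style, i):
    """Return per-layer latents where layers at index >= i take the style code.

    w_face, w_style: arrays of shape (NUM_LAYERS, LATENT_DIM)
    i: switch index; i = 0 means every layer is influenced by the style code.
    """
    # Binary per-layer mask; the real WBM would use learned soft weights here.
    mask = (np.arange(NUM_LAYERS) >= i)[:, None].astype(float)
    return (1.0 - mask) * w_face + mask * w_style

w_face = np.random.randn(NUM_LAYERS, LATENT_DIM)
w_style = np.random.randn(NUM_LAYERS, LATENT_DIM)

w_all_style = blend_latents(w_face, w_style, 0)  # i = 0: all layers stylized
w_partial = blend_latents(w_face, w_style, 8)    # coarse layers keep face structure
```

With i = 8, the coarse (early) layers keep the face latents, preserving identity and pose, while the finer layers follow the style code; lowering i toward 0 shifts more layers to the style, strengthening the stylization effect.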
BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation
Generative Adversarial Networks (GANs) have made a dramatic leap in high-fidelity image synthesis and stylized face generation. Recently, a layer-swapping mechanism has been developed to improve the stylization performance. However, this method is incapable of fitting arbitrary styles in a single model and requires hundreds of style-consistent training images for each style. To address the above issues, we propose BlendGAN for arbitrary stylized face generation by leveraging a flexible blending strategy and a generic artistic dataset. Specifically, we first train a self-supervised style encoder on the generic artistic dataset to extract the representations of arbitrary styles. In addition, a weighted blending module (WBM) is proposed to blend face and style representations implicitly and control the arbitrary stylization effect. By doing so, BlendGAN can gracefully fit arbitrary styles in a unified model while avoiding case-by-case preparation of style-consistent training images. We also present AAHQ, a novel large-scale artistic face dataset. Extensive experiments demonstrate that BlendGAN outperforms state-of-the-art methods in terms of visual quality and style diversity for both latent-guided and reference-guided stylized face synthesis.