CcGAN: Continuous Conditional Generative Adversarial Networks for Image Generation
Ding, Xin, Wang, Yongwei, Xu, Zuheng, Welch, William J., Wang, Z. Jane
This work proposes the continuous conditional generative adversarial network (CcGAN), the first generative model for image generation conditional on continuous, scalar conditions (termed regression labels). Existing conditional GANs (cGANs) are mainly designed for categorical conditions (e.g., class labels); conditioning on regression labels is mathematically distinct and raises two fundamental problems: (P1) Since there may be very few (even zero) real images for some regression labels, minimizing existing empirical versions of cGAN losses (a.k.a. empirical cGAN losses) often fails in practice; (P2) Since regression labels are scalar and can take infinitely many values, conventional label input methods are not applicable. The proposed CcGAN solves the above problems, respectively, by (S1) reformulating existing empirical cGAN losses to be appropriate for the continuous scenario; and (S2) proposing a naive label input (NLI) method and an improved label input (ILI) method to incorporate regression labels into the generator and the discriminator. The reformulation in (S1) leads to two novel empirical discriminator losses, termed the hard vicinal discriminator loss (HVDL) and the soft vicinal discriminator loss (SVDL) respectively, and a novel empirical generator loss. The error bounds of a discriminator trained with HVDL and SVDL are derived under mild assumptions in this work. Two new benchmark datasets (RC-49 and Cell-200) and a novel evaluation metric (Sliding Fréchet Inception Distance) are also proposed for this continuous scenario. Our experiments on the Circular 2-D Gaussians, RC-49, UTKFace, Cell-200, and Steering Angle datasets show that CcGAN can generate diverse, high-quality samples from the image distribution conditional on a given regression label. Moreover, in these experiments, CcGAN substantially outperforms cGAN both visually and quantitatively.
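To make the hard-vicinity idea behind (S1) concrete, the snippet below is a minimal, hypothetical PyTorch-style sketch (not the authors' implementation, and it omits the paper's vicinity-count weighting and soft-vicinity variant): a target label is drawn from the batch, jittered with Gaussian noise, and real images whose labels fall within a radius κ of the jittered target are treated as samples from that label's conditional distribution. The names `netD`, `kappa`, and `sigma` are our own.

```python
import torch

def hard_vicinal_d_loss(netD, real_images, real_labels, fake_images,
                        kappa=0.02, sigma=0.01):
    """Hard-vicinity discriminator loss sketch for one continuous target label.

    A target label is drawn from the real batch and jittered with Gaussian noise;
    real images whose labels lie within `kappa` of the jittered target are treated
    as samples from that target label's conditional distribution.
    """
    # Draw one target label from the batch and perturb it with Gaussian noise.
    idx = torch.randint(0, len(real_labels), (1,))
    target = real_labels[idx] + sigma * torch.randn(1)

    # Hard vicinity: keep real samples whose labels fall within kappa of the target.
    in_vicinity = (real_labels - target).abs() <= kappa
    n_real = int(in_vicinity.sum())
    if n_real == 0:                       # empty vicinity: skip this target label
        return None
    d_real = netD(real_images[in_vicinity], target.expand(n_real))

    # Fake images are assumed to have been generated at the same target label.
    d_fake = netD(fake_images, target.expand(len(fake_images)))

    # Vanilla cGAN discriminator objective, averaged over the vicinity.
    return (-torch.log(d_real + 1e-8).mean()
            - torch.log(1.0 - d_fake + 1e-8).mean())
```

Replacing the hard indicator with a Gaussian weight over the label distance would give the soft-vicinity (SVDL-style) analogue of this sketch.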
A Deep Learning Based Attack for The Chaos-based Image Encryption
He, Chen, Ming, Kan, Wang, Yongwei, Wang, Z. Jane
In this letter, as a proof of concept, we propose a deep learning-based approach to attack the chaos-based image encryption algorithm in \cite{guan2005chaos}. The proposed method first projects the chaos-based encrypted images into a low-dimensional feature space, where essential information of the plain images is largely preserved. From the low-dimensional features, a deconvolutional generator is used to regenerate decrypted images that perceptually approximate the plain images in the high-dimensional image space. Compared with conventional image encryption attack algorithms, the proposed method does not require time-consuming manual analysis and inference of keys. Instead, we directly attack the chaos-based encryption algorithm in a key-independent manner. Moreover, the proposed method can be trained end-to-end. Given chaos-based encrypted images, a well-trained decryption model automatically reconstructs plain images with high fidelity. In the experiments, we successfully attack the chaos-based algorithm of \cite{guan2005chaos}, and the decrypted images are visually similar to their ground-truth plain images. Experimental results under both static-key and dynamic-key scenarios verify the efficacy of the proposed method.
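As a rough illustration of the pipeline described above, the following sketch (ours, not the authors' code) pairs a convolutional encoder that maps an encrypted image to a low-dimensional feature vector with a deconvolutional generator that reconstructs an approximate plain image, trained end-to-end with a pixel-wise reconstruction loss over (encrypted, plain) image pairs. The 1x64x64 grayscale input and all layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChaosDecryptor(nn.Module):
    """Illustrative encoder + deconvolutional generator for key-independent decryption."""
    def __init__(self, feat_dim=128):
        super().__init__()
        # Encoder: project a 1x64x64 encrypted image to a low-dimensional feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32x32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 64x16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 128x8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, feat_dim),
        )
        # Deconvolutional generator: upsample the features back to an approximate plain image.
        self.generator = nn.Sequential(
            nn.Linear(feat_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 64x16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32x32x32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # -> 1x64x64
        )

    def forward(self, encrypted):
        return self.generator(self.encoder(encrypted))

# End-to-end training step on (encrypted, plain) pairs with a pixel-wise loss.
model = ChaosDecryptor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
encrypted = torch.rand(8, 1, 64, 64)   # placeholder batch of encrypted images
plain = torch.rand(8, 1, 64, 64)       # corresponding ground-truth plain images
loss = nn.functional.mse_loss(model(encrypted), plain)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the model is trained purely on image pairs, no key analysis enters the procedure, which is the key-independent property the abstract highlights.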