Second FRCSyn-onGoing: Winning Solutions and Post-Challenge Analysis to Improve Face Recognition with Synthetic Data
DeAndres-Tame, Ivan, Tolosana, Ruben, Melzi, Pietro, Vera-Rodriguez, Ruben, Kim, Minchul, Rathgeb, Christian, Liu, Xiaoming, Gomez, Luis F., Morales, Aythami, Fierrez, Julian, Ortega-Garcia, Javier, Zhong, Zhizhou, Huang, Yuge, Mi, Yuxi, Ding, Shouhong, Zhou, Shuigeng, He, Shuai, Fu, Lingzhi, Cong, Heng, Zhang, Rongyu, Xiao, Zhihong, Smirnov, Evgeny, Pimenov, Anton, Grigorev, Aleksei, Timoshenko, Denis, Asfaw, Kaleb Mesfin, Low, Cheng Yaw, Liu, Hao, Wang, Chuyi, Zuo, Qing, He, Zhixiang, Shahreza, Hatef Otroshi, George, Anjith, Unnervik, Alexander, Rahimi, Parsa, Marcel, Sébastien, Neto, Pedro C., Huber, Marco, Kolf, Jan Niklas, Damer, Naser, Boutros, Fadi, Cardoso, Jaime S., Sequeira, Ana F., Atzori, Andrea, Fenu, Gianni, Marras, Mirko, Štruc, Vitomir, Yu, Jiang, Li, Zhangjie, Li, Jichun, Zhao, Weisong, Lei, Zhen, Zhu, Xiangyu, Zhang, Xiao-Yu, Biesseck, Bernardo, Vidal, Pedro, Coelho, Luiz, Granada, Roger, Menotti, David
Synthetic data is gaining increasing popularity for face recognition technologies, mainly due to privacy concerns and the challenges of obtaining real data that covers diverse scenarios, quality levels, and demographic groups, among other factors. Synthetic data also offers advantages over real data, such as the large volumes that can be generated and the ability to customize it to specific problem-solving needs. To effectively use such data, face recognition models should also be specifically designed to exploit synthetic data to its fullest potential. In order to promote the proposal of novel Generative AI methods and synthetic data, and to investigate the application of synthetic data to better train face recognition systems, we introduce the 2nd FRCSyn-onGoing challenge, based on the 2nd Face Recognition Challenge in the Era of Synthetic Data (FRCSyn), originally launched at CVPR 2024. This ongoing challenge provides researchers with an accessible platform to benchmark i) novel Generative AI methods and synthetic data, and ii) novel face recognition systems specifically designed to take advantage of synthetic data. We focus on exploring the use of synthetic data both individually and in combination with real data to address current challenges in face recognition, including demographic bias, domain adaptation, and performance constraints in demanding situations such as age disparities between training and testing, pose changes, or occlusions. This second edition yields notable findings, including a direct comparison with the first edition, in which synthetic databases were restricted to DCFace and GANDiffFace.
A Quality-Centric Framework for Generic Deepfake Detection
Song, Wentang, Yan, Zhiyuan, Lin, Yuzhen, Yao, Taiping, Chen, Changsheng, Chen, Shen, Zhao, Yandan, Ding, Shouhong, Li, Bin
This paper addresses the generalization issue in deepfake detection by harnessing forgery quality in training data. Generally, the forgery quality of different deepfakes varies: some have easily recognizable forgery clues, while others are highly realistic. Existing works often train detectors on a mix of deepfakes with varying forgery qualities, potentially leading detectors to shortcut learning by latching onto the easy-to-spot artifacts of low-quality forgery samples, thereby hurting generalization performance. To tackle this issue, we propose a novel quality-centric framework for generic deepfake detection, composed of a Quality Evaluator, a low-quality data enhancement module, and a learning pacing strategy that explicitly incorporates forgery quality into the training process. The framework is inspired by curriculum learning and is designed to gradually expose the detector to more challenging deepfake samples, starting with easier samples and progressing to more realistic ones. We employ both static and dynamic assessments of forgery quality, combining their scores to produce a final rating for each training sample. The rating score guides the selection of deepfake samples for training, with higher-rated samples having a higher probability of being chosen. Furthermore, we propose a novel frequency data augmentation method specifically designed for low-quality forgery samples, which helps to reduce obvious forgery traces and improve their overall realism. Extensive experiments show that our method can be applied in a plug-and-play manner and significantly enhances generalization performance.
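The abstract does not spell out how the fused quality rating translates into sample selection, so the sketch below is only an illustrative reading of the learning pacing strategy: per-sample ratings (assumed to be the fused static-plus-dynamic scores, already computed) are turned into sampling probabilities whose concentration on highly rated, more realistic forgeries grows over training. The pacing schedule, gamma_max, and the toy ratings are assumptions, not the paper's settings.

```python
import numpy as np

def sampling_probs(ratings, epoch, max_epoch, gamma_max=4.0):
    """Turn fused forgery-quality ratings into sampling probabilities.

    Curriculum-style pacing: the exponent grows from 0 (uniform sampling,
    so easy low-quality forgeries are still seen often) towards gamma_max,
    so probability mass gradually concentrates on highly rated, more
    realistic forgeries. Schedule and gamma_max are illustrative only.
    """
    gamma = gamma_max * epoch / max(max_epoch - 1, 1)
    w = np.asarray(ratings, dtype=np.float64) ** gamma
    return w / w.sum()

# toy usage: 6 fake samples whose static and dynamic scores are already fused
ratings = [0.2, 0.9, 0.5, 0.1, 0.8, 0.7]
rng = np.random.default_rng(0)
for epoch in (0, 5, 9):
    p = sampling_probs(ratings, epoch, max_epoch=10)
    batch = rng.choice(len(ratings), size=4, replace=False, p=p)
    print(epoch, np.round(p, 2), batch)
```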
SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models
Zhao, Linglan, Zhang, Xuerui, Yan, Ke, Ding, Shouhong, Huang, Weiran
Continual learning aims to incrementally acquire new concepts in data streams while resisting the forgetting of previously learned knowledge. With the rise of powerful pre-trained models (PTMs), there is growing interest in training incremental learning systems on top of these foundation models rather than learning from scratch. Existing works often view PTMs as a strong starting point and directly apply parameter-efficient tuning (PET) in the first session to adapt to downstream tasks. In the following sessions, most methods freeze the model parameters to mitigate forgetting. However, applying PET directly to downstream data cannot fully exploit the knowledge inherent in PTMs. Additionally, freezing the parameters in incremental sessions hinders the model's plasticity toward novel concepts not covered in the first session. To solve the above issues, we propose a Slow And Fast parameter-Efficient tuning (SAFE) framework. In particular, to inherit general knowledge from foundation models, we include a transfer loss function that measures the correlation between the PTM and the PET-applied model. After calibration in the first session, the slow efficient-tuning parameters can capture more informative features, improving generalization to incoming classes. Moreover, to further incorporate novel concepts, we strike a balance between stability and plasticity by fixing the slow efficient-tuning parameters and continuously updating the fast ones. Specifically, a cross-classification loss with feature alignment is proposed to circumvent catastrophic forgetting. During inference, we introduce an entropy-based aggregation strategy to dynamically exploit the complementarity of the slow and fast learners. Extensive experiments on seven benchmark datasets verify the effectiveness of our method, which significantly surpasses the state of the art.
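As an illustration of the inference-time fusion described above, the following sketch combines the predictions of the slow and fast learners by weighting each branch with its confidence (negative predictive entropy). The exact aggregation rule is not given in the abstract, so the softmax-over-negative-entropy weighting here is an assumption.

```python
import torch
import torch.nn.functional as F

def entropy_weighted_aggregate(logits_slow, logits_fast):
    """Fuse slow- and fast-learner predictions at inference time.

    Each branch is weighted per sample by its confidence (negative
    predictive entropy), so the more certain learner dominates. This is an
    illustrative reading of entropy-based aggregation, not the paper's
    exact formula.
    """
    probs = torch.stack([F.softmax(logits_slow, dim=-1),
                         F.softmax(logits_fast, dim=-1)])        # (2, B, C)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)     # (2, B)
    weights = F.softmax(-entropy, dim=0).unsqueeze(-1)           # (2, B, 1)
    return (weights * probs).sum(0)                              # (B, C)

# toy usage: a batch of 4 samples and 10 classes with random logits
fused = entropy_weighted_aggregate(torch.randn(4, 10), torch.randn(4, 10))
print(fused.sum(dim=-1))  # each row sums to 1
```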
Decoupled Data Augmentation for Improving Image Classification
Chen, Ruoxin, Wang, Zhe, Zhang, Ke-Yue, Wu, Shuang, Sun, Jiamu, Wang, Shouli, Yao, Taiping, Ding, Shouhong
Recent advancements in image mixing and generative data augmentation have shown promise in enhancing image classification. However, these techniques face the challenge of balancing semantic fidelity with diversity. Specifically, image mixing interpolates two images to create a new one, but this pixel-level interpolation can compromise fidelity. Generative augmentation uses text-to-image generative models to synthesize or modify images, often limiting diversity to avoid generating out-of-distribution data that could harm accuracy. We argue that this fidelity-diversity dilemma partially stems from the whole-image paradigm of existing methods. Since an image comprises a class-dependent part (CDP) and a class-independent part (CIP), and each part affects the image's fidelity in fundamentally different ways, treating both parts uniformly can be misleading. To address this dilemma, we introduce Decoupled Data Augmentation (De-DA), which separates images into CDPs and CIPs and handles them adaptively. To maintain fidelity, we use generative models to modify real CDPs under controlled conditions, preserving semantic consistency. To enhance diversity, we replace the image's CIP with inter-class variants, creating diverse CDP-CIP combinations. Additionally, we implement an online randomized combination strategy during training to generate numerous distinct CDP-CIP combinations cost-effectively. Comprehensive empirical evaluations validate the effectiveness of our method.
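A minimal sketch of the online randomized combination step is given below: masked class-dependent parts are pasted onto randomly drawn class-independent backgrounds to form new training images whose labels follow the CDP. The compositing is deliberately simple (alpha blending with a binary mask) and the paper's controlled generative editing of CDPs is not reproduced, so treat the interfaces and shapes as assumptions.

```python
import torch

def compose(cdp, cdp_mask, cip):
    """Paste a class-dependent part (CDP) onto a class-independent
    background (CIP) using a binary mask. cdp/cip: (3, H, W), mask: (1, H, W).
    Illustrative of the online combination step only.
    """
    return cdp_mask * cdp + (1.0 - cdp_mask) * cip

def random_combinations(cdps, masks, cips, n, generator=None):
    """Draw n random CDP-CIP pairings to form augmented training images."""
    images, labels = [], []
    for _ in range(n):
        i = torch.randint(len(cdps), (1,), generator=generator).item()
        j = torch.randint(len(cips), (1,), generator=generator).item()
        images.append(compose(cdps[i], masks[i], cips[j]))
        labels.append(i)  # the label follows the class-dependent part
    return torch.stack(images), torch.tensor(labels)

# toy usage: 2 foreground CDPs, 3 background CIPs, 64x64 images
g = torch.Generator().manual_seed(0)
cdps = torch.rand(2, 3, 64, 64)
masks = (torch.rand(2, 1, 64, 64) > 0.5).float()
cips = torch.rand(3, 3, 64, 64)
imgs, ys = random_combinations(cdps, masks, cips, n=8, generator=g)
print(imgs.shape, ys.tolist())
```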
TAS: Distilling Arbitrary Teacher and Student via a Hybrid Assistant
Li, Guopeng, Wang, Qiang, Yan, Ke, Ding, Shouhong, Gao, Yuan, Xia, Gui-Song
Most knowledge distillation (KD) methodologies predominantly focus on teacher-student pairs with similar architectures, such as both being convolutional neural networks (CNNs). However, the potential and flexibility of KD can be greatly improved by extending it to the novel setting of Cross-Architecture KD (CAKD), where the knowledge of homogeneous and heterogeneous teachers can be transferred flexibly to a given student. The primary challenge in CAKD lies in the substantial feature gaps between heterogeneous models, stemming from differences in their inherent inductive biases and module functions. To this end, we introduce an assistant model as a bridge to facilitate smooth feature-level knowledge transfer between heterogeneous teachers and students. More importantly, within our proposed design principle, the assistant model combines the advantages of cross-architecture inductive biases and module functions by merging convolution and attention modules derived from both the student and the teacher. Furthermore, we observe that heterogeneous features exhibit diverse spatial distributions in CAKD, hindering the effectiveness of the conventional pixel-wise mean squared error (MSE) loss. Therefore, we leverage a spatial-agnostic InfoNCE loss to align features after spatial smoothing, thereby improving feature alignment in CAKD. Our proposed method is evaluated across homogeneous model pairs and arbitrary heterogeneous combinations of CNNs, ViTs, and MLPs, achieving state-of-the-art performance for distilled models with a maximum gain of 11.47% on CIFAR-100 and 3.67% on ImageNet-1K. Our code and models will be released. Knowledge Distillation (KD) (Hinton et al., 2015; Romero et al., 2015) has been demonstrated to be a powerful method for transferring knowledge from a pre-trained, cumbersome teacher model to a compact and efficient student model. Compared to a model trained from scratch, the performance of a student distilled by an appropriate teacher usually improves significantly.
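The sketch below illustrates one way a spatial-agnostic InfoNCE alignment could be implemented: teacher and student feature maps are spatially smoothed by pooling, L2-normalized, and contrasted within the batch so that matched pairs are positives and all other pairs are negatives. It assumes the channel dimensions have already been projected to a common size; the pooling size and temperature are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def spatial_agnostic_infonce(f_student, f_teacher, temperature=0.1, pool=4):
    """InfoNCE alignment of heterogeneous features after spatial smoothing.

    f_student, f_teacher: (B, C, H, W) feature maps with matching channel
    dims (assumed pre-projected). Matched (student_i, teacher_i) pairs are
    positives; other in-batch pairs act as negatives.
    """
    # spatial smoothing so CNN/ViT/MLP features are less sensitive to where
    # exactly the activation mass sits
    s = F.normalize(F.adaptive_avg_pool2d(f_student, pool).flatten(1), dim=1)
    t = F.normalize(F.adaptive_avg_pool2d(f_teacher, pool).flatten(1), dim=1)
    logits = s @ t.T / temperature                       # (B, B) similarities
    targets = torch.arange(s.size(0), device=s.device)   # diagonal = positives
    return F.cross_entropy(logits, targets)

# toy usage: batch of 8, both features already projected to 256 channels
loss = spatial_agnostic_infonce(torch.randn(8, 256, 7, 7), torch.randn(8, 256, 7, 7))
print(float(loss))
```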
LLaCA: Multimodal Large Language Continual Assistant
Qiao, Jingyang, Zhang, Zhizhong, Tan, Xin, Qu, Yanyun, Ding, Shouhong, Xie, Yuan
Instruction tuning guides Multimodal Large Language Models (MLLMs) in aligning different modalities by designing text instructions, and it has become an essential technique for enhancing the capabilities and controllability of foundation models. In this framework, Multimodal Continual Instruction Tuning (MCIT) is adopted to continually instruct MLLMs to follow human intent across sequential datasets. We observe that the standard gradient update heavily degrades tuning performance on previous datasets and the zero-shot ability during continual instruction tuning. The Exponential Moving Average (EMA) update policy can trace previous parameters, which helps decrease forgetting. However, its fixed balance weight cannot cope with ever-changing datasets, leading to an imbalance between the plasticity and stability of MLLMs. In this paper, we propose a method called Multimodal Large Language Continual Assistant (LLaCA) to address this challenge. Starting from the trade-off prerequisite and the EMA update, we formulate an ideal condition for plasticity and stability. Based on a Taylor expansion of the loss function, we find that the optimal balance weight is essentially determined by the gradient information and the previous parameters. We determine the balance weight automatically and significantly improve performance. Through comprehensive experiments on LLaVA-1.5 in a continual visual-question-answering benchmark, compared with the baseline, our approach not only greatly improves anti-forgetting ability (reducing forgetting from 22.67 to 2.68) but also significantly promotes continual tuning performance (increasing average accuracy from 41.31 to 61.89). Our code will be published soon.
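The abstract does not give the closed-form balance weight, so the following sketch only illustrates the general mechanism: an EMA-style consolidation of the tuned parameters in which the per-parameter weight is adapted from gradient information and the previous parameters. The specific weighting rule below (trust the new parameters less where gradients are still large) is a hypothetical stand-in, not LLaCA's derived formula.

```python
import torch

@torch.no_grad()
def adaptive_ema_update(ema_params, model_params, grads, eps=1e-8):
    """EMA-style consolidation with a per-parameter adaptive balance weight.

    The weight below is a hypothetical function of the gradient and the
    distance to the previous (EMA) parameters; LLaCA derives its weight
    from a Taylor expansion, which is not reproduced here.
    """
    for ema_p, p, g in zip(ema_params, model_params, grads):
        # larger gradient relative to the parameter change -> smaller weight
        # on the newly tuned parameters (more stability, less plasticity)
        beta = 1.0 / (1.0 + g.abs() / (p.sub(ema_p).abs() + eps))
        ema_p.mul_(1.0 - beta).add_(beta * p)

# toy usage: one parameter tensor and the gradient of a simple quadratic loss
ema = [torch.zeros(3)]
cur = [torch.ones(3, requires_grad=True)]
(cur[0] ** 2).sum().backward()
adaptive_ema_update(ema, cur, [cur[0].grad])
print(ema[0])
```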
Second Edition FRCSyn Challenge at CVPR 2024: Face Recognition Challenge in the Era of Synthetic Data
DeAndres-Tame, Ivan, Tolosana, Ruben, Melzi, Pietro, Vera-Rodriguez, Ruben, Kim, Minchul, Rathgeb, Christian, Liu, Xiaoming, Morales, Aythami, Fierrez, Julian, Ortega-Garcia, Javier, Zhong, Zhizhou, Huang, Yuge, Mi, Yuxi, Ding, Shouhong, Zhou, Shuigeng, He, Shuai, Fu, Lingzhi, Cong, Heng, Zhang, Rongyu, Xiao, Zhihong, Smirnov, Evgeny, Pimenov, Anton, Grigorev, Aleksei, Timoshenko, Denis, Asfaw, Kaleb Mesfin, Low, Cheng Yaw, Liu, Hao, Wang, Chuyi, Zuo, Qing, He, Zhixiang, Shahreza, Hatef Otroshi, George, Anjith, Unnervik, Alexander, Rahimi, Parsa, Marcel, Sébastien, Neto, Pedro C., Huber, Marco, Kolf, Jan Niklas, Damer, Naser, Boutros, Fadi, Cardoso, Jaime S., Sequeira, Ana F., Atzori, Andrea, Fenu, Gianni, Marras, Mirko, Štruc, Vitomir, Yu, Jiang, Li, Zhangjie, Li, Jichun, Zhao, Weisong, Lei, Zhen, Zhu, Xiangyu, Zhang, Xiao-Yu, Biesseck, Bernardo, Vidal, Pedro, Coelho, Luiz, Granada, Roger, Menotti, David
Synthetic data is gaining increasing relevance for training machine learning models. This is mainly motivated by several factors, such as the lack of real data and intra-class variability, the time and errors involved in manual labeling, and, in some cases, privacy concerns. This paper presents an overview of the 2nd edition of the Face Recognition Challenge in the Era of Synthetic Data (FRCSyn) organized at CVPR 2024. FRCSyn aims to investigate the use of synthetic data in face recognition to address current technological limitations, including data privacy concerns, demographic biases, generalization to novel scenarios, and performance constraints in challenging situations such as aging, pose variations, and occlusions. Unlike the 1st edition, in which only synthetic data from the DCFace and GANDiffFace methods was allowed for training face recognition systems, in this 2nd edition we propose new sub-tasks that allow participants to explore novel face generative methods. The outcomes of the 2nd FRCSyn Challenge, along with the proposed experimental protocol and benchmarking, contribute significantly to the application of synthetic data to face recognition.
LaRE^2: Latent Reconstruction Error Based Method for Diffusion-Generated Image Detection
Luo, Yunpeng, Du, Junlong, Yan, Ke, Ding, Shouhong
The evolution of Diffusion Models has dramatically improved image generation quality, making it increasingly difficult to differentiate between real and generated images. This development, while impressive, also raises significant privacy and security concerns. In response, we propose a novel Latent REconstruction error guided feature REfinement method (LaRE^2) for detecting diffusion-generated images. We introduce the Latent Reconstruction Error (LaRE), the first reconstruction-error-based feature computed in the latent space for generated-image detection. LaRE surpasses existing methods in feature extraction efficiency while preserving the crucial cues required to differentiate between real and fake images. To exploit LaRE, we propose an Error-Guided feature REfinement module (EGRE), which refines the image feature guided by LaRE to enhance its discriminativeness. Our EGRE utilizes an align-then-refine mechanism, which effectively refines the image feature for generated-image detection from both spatial and channel perspectives. Extensive experiments on the large-scale GenImage benchmark demonstrate the superiority of our LaRE^2, which surpasses the best state-of-the-art method by up to 11.9%/12.1% average ACC/AP across 8 different image generators. LaRE also surpasses existing methods in feature extraction cost, delivering an eightfold speedup.
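A schematic of how a latent reconstruction error can be computed is sketched below: the image is mapped to the diffusion latent space, perturbed at a chosen timestep, denoised, and compared with the original latent. The callables stand in for a pretrained VAE encoder and diffusion model (hypothetical interfaces), and the per-image scalar reduction is only for illustration; the paper uses the error as a feature to guide refinement rather than as a direct score.

```python
import torch

def latent_reconstruction_error(x, encode, add_noise, denoise_step, t):
    """Compute a LaRE-style reconstruction error in the latent space.

    encode/add_noise/denoise_step are stand-ins for a pretrained VAE encoder
    and a diffusion model's forward/reverse steps; the interfaces here are
    assumptions, not the paper's exact pipeline.
    """
    z = encode(x)                      # (B, C, h, w) latent of the image
    noise = torch.randn_like(z)
    z_t = add_noise(z, noise, t)       # diffuse the latent to timestep t
    z_hat = denoise_step(z_t, t)       # reconstruct with one denoising step
    err = (z_hat - z) ** 2             # element-wise reconstruction error
    return err.mean(dim=(1, 2, 3))     # reduced to one score per image

# toy usage with dummy components (identity encoder, toy noising/denoising)
x = torch.rand(2, 3, 64, 64)
score = latent_reconstruction_error(
    x,
    encode=lambda img: img,
    add_noise=lambda z, n, t: 0.9 * z + 0.1 * n,
    denoise_step=lambda z_t, t: z_t / 0.9,
    t=100,
)
print(score)
```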
Content-based Unrestricted Adversarial Attack
Chen, Zhaoyu, Li, Bo, Wu, Shuang, Jiang, Kaixun, Ding, Shouhong, Zhang, Wenqiang
Unrestricted adversarial attacks typically manipulate the semantic content of an image (e.g., color or texture) to create adversarial examples that are both effective and photorealistic, demonstrating their ability to deceive human perception and deep neural networks with stealth and success. However, current works usually sacrifice the degree of unrestrictedness and subjectively select certain image content to guarantee the photorealism of unrestricted adversarial examples, which limits their attack performance. To ensure the photorealism of adversarial examples while boosting attack performance, we propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack. By leveraging a low-dimensional manifold that represents natural images, we map images onto this manifold and optimize them along its adversarial direction. Within this framework, we implement the Adversarial Content Attack (ACA) based on Stable Diffusion, which can generate highly transferable unrestricted adversarial examples with diverse adversarial content. Extensive experiments and visualizations demonstrate the efficacy of ACA, which surpasses state-of-the-art attacks by an average of 13.3-50.4% and 16.8-48.0%.
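To make the manifold-based optimization concrete, here is a minimal sketch under assumed interfaces: a latent code is decoded by a generative model into an image, the classification loss of the target model is maximized, and the update is applied to the latent rather than to the pixels, so the example stays on the generator's natural-image manifold. The decoder, step size, and number of steps are illustrative, not ACA's actual Stable Diffusion based procedure.

```python
import torch

def adversarial_latent_attack(z0, decode, classifier, label, steps=10, lr=0.05):
    """Optimize a latent code along an adversarial direction.

    decode: latent -> image (stand-in for a pretrained generative model).
    The latent, not the pixels, is updated by signed gradient ascent on the
    classification loss; hyperparameters are illustrative only.
    """
    z = z0.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = classifier(decode(z))
        loss = torch.nn.functional.cross_entropy(logits, label)
        (grad,) = torch.autograd.grad(loss, z)
        z = (z + lr * grad.sign()).detach().requires_grad_(True)  # ascend loss
    return decode(z).detach()

# toy usage: a linear "decoder" and classifier over a 16-d latent
gen = torch.nn.Linear(16, 3 * 8 * 8)
clf = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
adv = adversarial_latent_attack(
    torch.randn(1, 16),
    decode=lambda z: gen(z).view(-1, 3, 8, 8),
    classifier=clf,
    label=torch.tensor([3]),
)
print(adv.shape)
```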
Combining Past, Present and Future: A Self-Supervised Approach for Class Incremental Learning
Chen, Xiaoshuang, Sun, Zhongyi, Yan, Ke, Ding, Shouhong, Lu, Hongtao
Class Incremental Learning (CIL) aims to handle the scenario where data of novel classes arrives continuously and sequentially. The model should recognize the sequential novel classes while alleviating catastrophic forgetting. In the self-supervised setting, it becomes even more challenging to avoid conflicts between the feature embedding spaces of novel and old classes without any class labels. To address this problem, we propose CPPF, a self-supervised CIL framework whose name stands for Combining Past, Present and Future. In detail, CPPF consists of a prototype clustering module (PC), an embedding space reserving module (ESR), and a multi-teacher distillation module (MTD). 1) The PC and ESR modules reserve embedding space for subsequent phases at the prototype level and the feature level, respectively, to prepare for knowledge to be learned in the future. 2) The MTD module maintains the representations of the current phase without interference from past knowledge. One teacher network retains the representations of the past phases, while the other distills relation information of the current phase to the student network. Extensive experiments on the CIFAR100 and ImageNet100 datasets demonstrate that our proposed method boosts the performance of self-supervised class incremental learning. We will release code in the near future.
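As a final illustration, the sketch below shows one plausible form of the two-teacher distillation described above: a feature-matching term anchors the student to the frozen past-phase teacher, while a relational term matches batch-wise cosine-similarity structure against the current-phase teacher. The specific losses and the weighting are assumptions rather than the paper's exact MTD formulation.

```python
import torch
import torch.nn.functional as F

def multi_teacher_distill(f_student, f_past_teacher, f_current_teacher, w=0.5):
    """Two-teacher distillation sketch on (B, D) feature batches.

    The past teacher anchors the student to earlier representations via
    feature matching; the current-phase teacher transfers relation
    information via pairwise similarity matrices. Illustrative only.
    """
    # 1) retain past knowledge: match features of the frozen past teacher
    past_loss = F.mse_loss(f_student, f_past_teacher)

    # 2) relation distillation: match batch-wise cosine-similarity structure
    def relation(f):
        f = F.normalize(f, dim=1)
        return f @ f.T

    rel_loss = F.mse_loss(relation(f_student), relation(f_current_teacher))
    return past_loss + w * rel_loss

# toy usage: 8 samples with 128-d features from student and both teachers
loss = multi_teacher_distill(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
print(float(loss))
```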