Collaborating Authors

 Bui, Anh


Fantastic Targets for Concept Erasure in Diffusion Models and Where To Find Them

arXiv.org Artificial Intelligence

Concept erasure has emerged as a promising technique for mitigating the risk of harmful content generation in diffusion models by selectively unlearning undesirable concepts. The common principle in previous works for removing a specific concept is to map it to a fixed generic concept, such as a neutral concept or simply an empty text prompt. In this paper, we demonstrate that this fixed-target strategy is suboptimal, as it fails to account for the impact of erasing one concept on the others. To address this limitation, we model the concept space as a graph and empirically analyze the effects of erasing one concept on the remaining concepts. Our analysis uncovers intriguing geometric properties of the concept space, where the influence of erasing a concept is confined to a local region. Building on this insight, we propose the Adaptive Guided Erasure (AGE) method, which dynamically selects optimal target concepts tailored to each undesirable concept, minimizing unintended side effects. Experimental results show that AGE significantly outperforms state-of-the-art erasure methods in preserving unrelated concepts while maintaining effective erasure performance. Our code is published at https://github.com/tuananhbui89/Adaptive-Guided-Erasure.
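
As a rough, toy-level illustration of the adaptive-target idea, the Python sketch below picks, for each undesirable concept, the candidate target whose estimated pull on neighbouring concepts is smallest. The embeddings, the side_effect heuristic, and the candidate list are placeholders of ours, not the AGE implementation.

# Minimal sketch of adaptively choosing an erasure target per undesirable
# concept, assuming we can embed concepts (here: random placeholder vectors
# standing in for a text encoder such as CLIP) and estimate side effects on
# neighbouring concepts via embedding distances.
import numpy as np

rng = np.random.default_rng(0)
concepts = ["nudity", "violence", "cat", "dog", "car"]
candidate_targets = ["", "a photo", "an object"]
emb = {c: rng.normal(size=16) for c in concepts + candidate_targets}

def side_effect(source, target, neighbours):
    """Estimate how much mapping `source` -> `target` perturbs neighbours:
    neighbours close to the source are assumed to be dragged toward the
    target, so we penalise that pull."""
    cost = 0.0
    for n in neighbours:
        closeness = 1.0 / (1.0 + np.linalg.norm(emb[source] - emb[n]))
        pull = np.linalg.norm(emb[n] - emb[target])
        cost += closeness * pull
    return cost

def pick_target(source, neighbours):
    # Dynamically select the target with the smallest estimated side effect.
    return min(candidate_targets, key=lambda t: side_effect(source, t, neighbours))

for bad in ["nudity", "violence"]:
    others = [c for c in concepts if c != bad]
    print(bad, "->", repr(pick_target(bad, others)))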


Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation

arXiv.org Artificial Intelligence

Diffusion models excel at generating visually striking content from text but can inadvertently produce undesirable or harmful content when trained on unfiltered internet data. A practical solution is to selectively remove target concepts from the model, but this may impact the remaining concepts. Prior approaches have tried to balance this by introducing a loss term to preserve neutral content or a regularization term to minimize changes in the model parameters, yet resolving this trade-off remains challenging. In this work, we propose to identify and preserve the concepts most affected by parameter changes, termed adversarial concepts. This approach ensures stable erasure with minimal impact on the other concepts. We demonstrate the effectiveness of our method using the Stable Diffusion model, showing that it outperforms state-of-the-art erasure methods in eliminating unwanted content while maintaining the integrity of other unrelated elements. Our code is available at https://github.com/tuananhbui89/Erasing-Adversarial-Preservation.
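
A minimal sketch of the alternating idea, assuming a toy linear stand-in for the model and squared-error losses in place of the diffusion objective: at each step, the concept whose output drifts most from a frozen copy is treated as the adversarial concept and preserved while the target concept is erased.

# Toy sketch of "adversarial preservation": alternate between (i) finding the
# concept embedding whose output changes most under the current parameters
# versus a frozen copy, and (ii) erasing the target concept while preserving
# that most-affected ("adversarial") concept. Shapes and losses are
# illustrative placeholders, not the Stable Diffusion objective.
import copy
import torch

model = torch.nn.Linear(8, 8)            # stand-in for the denoiser's text pathway
frozen = copy.deepcopy(model).requires_grad_(False)
target = torch.randn(8)                  # embedding of the concept to erase
neutral = torch.zeros(8)                 # generic target it gets mapped to
pool = torch.randn(5, 8)                 # embeddings of concepts to protect

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for step in range(100):
    # (i) pick the concept currently most affected by the parameter change
    with torch.no_grad():
        drift = ((model(pool) - frozen(pool)) ** 2).sum(dim=1)
        adv = pool[drift.argmax()]
    # (ii) erase the target while pinning the adversarial concept to the frozen model
    erase_loss = ((model(target) - frozen(neutral)) ** 2).mean()
    preserve_loss = ((model(adv) - frozen(adv)) ** 2).mean()
    loss = erase_loss + preserve_loss
    opt.zero_grad(); loss.backward(); opt.step()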


Diversity-Aware Agnostic Ensemble of Sharpness Minimizers

arXiv.org Machine Learning

There has long been plenty of theoretical and empirical evidence supporting the success of ensemble learning. Deep ensembles in particular take advantage of training randomness and the expressivity of individual neural networks to gain prediction diversity, ultimately leading to better generalization, robustness, and uncertainty estimation. With respect to generalization, it has been found that pursuing wider local minima results in models that are more robust to shifts between training and testing sets. A natural research question arising from these two approaches is whether a boost in generalization ability can be achieved if ensemble learning and loss-sharpness minimization are integrated. Our work investigates this connection and proposes DASH - a learning algorithm that promotes diversity and flatness within deep ensembles. More concretely, DASH encourages base learners to move divergently towards low-loss regions of minimal sharpness. We provide a theoretical backbone for our method along with extensive empirical evidence demonstrating an improvement in ensemble generalizability.
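
The sketch below illustrates one plausible way to combine a SAM-style sharpness-aware step with a pairwise output-diversity bonus across ensemble members; the specific ascent step, KL-based diversity term, and weights are generic stand-ins, not DASH's exact formulation.

# Sketch: each ensemble member takes a sharpness-aware (SAM-like) step on its
# own loss while a diversity bonus pushes its predictions away from the other
# members' averaged predictions.
import torch
import torch.nn.functional as F

def make_member():
    return torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))

members = [make_member() for _ in range(3)]
opts = [torch.optim.SGD(m.parameters(), lr=0.05) for m in members]
x, y = torch.randn(64, 10), torch.randint(0, 3, (64,))
rho, div_weight = 0.05, 0.1

for step in range(20):
    with torch.no_grad():
        probs = [F.softmax(m(x), dim=1) for m in members]
    for i, (m, opt) in enumerate(zip(members, opts)):
        # SAM-style ascent: perturb weights toward higher loss ...
        loss = F.cross_entropy(m(x), y)
        grads = torch.autograd.grad(loss, list(m.parameters()))
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
        eps = [rho * g / norm for g in grads]
        with torch.no_grad():
            for p, e in zip(m.parameters(), eps):
                p.add_(e)
        # ... then descend on the perturbed loss plus a diversity bonus that
        # pushes this member's predictions away from the other members'.
        sharp_loss = F.cross_entropy(m(x), y)
        others = torch.stack([q for j, q in enumerate(probs) if j != i]).mean(dim=0)
        diversity = -F.kl_div(F.log_softmax(m(x), dim=1), others, reduction="batchmean")
        total = sharp_loss + div_weight * diversity
        opt.zero_grad(); total.backward()
        with torch.no_grad():
            for p, e in zip(m.parameters(), eps):
                p.sub_(e)               # undo the ascent perturbation
        opt.step()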


Removing Undesirable Concepts in Text-to-Image Generative Models with Learnable Prompts

arXiv.org Artificial Intelligence

Generative models have demonstrated remarkable potential in generating visually impressive content from textual descriptions. However, training these models on unfiltered internet data poses the risk of learning and subsequently propagating undesirable concepts, such as copyrighted or unethical content. In this paper, we propose a novel method to remove undesirable concepts from text-to-image generative models by incorporating a learnable prompt into the cross-attention module. This learnable prompt acts as additional memory: the knowledge of undesirable concepts is transferred into it, reducing the dependence of these concepts on the model parameters and the corresponding textual inputs. Because of this knowledge transfer into the prompt, erasing undesirable concepts is more stable and has minimal negative impact on other concepts. We demonstrate the effectiveness of our method on the Stable Diffusion model, showcasing its superiority over state-of-the-art erasure methods in removing undesirable content while preserving other unrelated elements.
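
A minimal sketch of attaching a learnable prompt to a cross-attention layer, with the prompt tokens concatenated to the textual keys/values so they can soak up concept knowledge; the module, dimensions, and training setup are illustrative, not Stable Diffusion's actual attention code.

# Learnable "memory" tokens are prepended to the textual context before the
# key/value projections of a cross-attention layer.
import torch

class PromptedCrossAttention(torch.nn.Module):
    def __init__(self, dim=64, n_prompt_tokens=4):
        super().__init__()
        self.to_q = torch.nn.Linear(dim, dim)
        self.to_k = torch.nn.Linear(dim, dim)
        self.to_v = torch.nn.Linear(dim, dim)
        # Extra learnable "memory" tokens; in the erasure setting these would
        # be optimized while the backbone projections are kept frozen.
        self.prompt = torch.nn.Parameter(torch.randn(n_prompt_tokens, dim) * 0.02)

    def forward(self, image_tokens, text_tokens):
        # image_tokens: (B, N, dim), text_tokens: (B, T, dim)
        prompt = self.prompt.unsqueeze(0).expand(text_tokens.size(0), -1, -1)
        context = torch.cat([prompt, text_tokens], dim=1)   # (B, P+T, dim)
        q, k, v = self.to_q(image_tokens), self.to_k(context), self.to_v(context)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        return attn @ v

layer = PromptedCrossAttention()
out = layer(torch.randn(2, 16, 64), torch.randn(2, 8, 64))
print(out.shape)   # torch.Size([2, 16, 64])

Training only the prompt parameter while the projection weights stay fixed is what would let the concept knowledge migrate into the prompt rather than the backbone, matching the motivation described above.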


Robust Contrastive Learning With Theory Guarantee

arXiv.org Artificial Intelligence

Contrastive learning (CL) allows us to create meaningful features without any label information. In the first phase, CL approaches learn the features, which are then classified in the second phase by a linear classifier learned from labeled data. Existing theoretical works have studied the connection between the supervised loss in the second phase and the unsupervised loss in the first phase to explain why the unsupervised loss can support the supervised loss. However, there has been no theoretical examination of the connection between the unsupervised loss in the first phase and the robust supervised loss in the second phase, which could shed light on how to establish an effective unsupervised loss for the first phase. To fill this gap, our paper develops rigorous theories to identify which components in the unsupervised loss can aid the robust supervised loss. Finally, we conduct experiments to verify our findings. All code used in this work is available at https://anonymous.4open.science/r/rosa.
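
To make the two-phase setup concrete, a standard formalization (notation ours, not necessarily the paper's) contrasts the phase-one unsupervised contrastive loss with the phase-two robust supervised loss evaluated under worst-case perturbations:

% Phase 1: learn a feature extractor f by minimizing an unsupervised
% contrastive loss over positive/negative pairs.
\min_{f}\ \mathcal{L}_{\mathrm{un}}(f)
  = \mathbb{E}_{x,\,x^{+},\,\{x^{-}_i\}}
    \left[-\log
      \frac{e^{f(x)^{\top} f(x^{+})}}
           {e^{f(x)^{\top} f(x^{+})} + \sum_{i} e^{f(x)^{\top} f(x^{-}_i)}}
    \right]

% Phase 2: fit a linear head g on the frozen features and evaluate the robust
% supervised loss under worst-case perturbations of radius epsilon.
\min_{g}\ \mathcal{L}_{\mathrm{rob}}(g \circ f)
  = \mathbb{E}_{(x,y)}
    \left[\max_{\|\delta\|\le\epsilon}
      \ell\big(g(f(x+\delta)),\, y\big)\right]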


Generating Adversarial Examples with Task Oriented Multi-Objective Optimization

arXiv.org Artificial Intelligence

Deep learning models, even state-of-the-art ones, are highly vulnerable to adversarial examples. Adversarial training is one of the most efficient methods to improve a model's robustness. The key factor in the success of adversarial training is the capability to generate qualified and divergent adversarial examples that satisfy certain objectives/goals (e.g., finding adversarial examples that maximize the model losses for simultaneously attacking multiple models). Therefore, multi-objective optimization (MOO) is a natural tool for adversarial example generation to achieve multiple objectives/goals simultaneously. However, we observe that a naive application of MOO tends to maximize all objectives/goals equally, without caring whether an objective/goal has already been achieved. This leads to wasted effort on further improving the goal-achieved tasks while placing less focus on the goal-unachieved tasks. In this paper, we propose Task Oriented MOO to address this issue, in the context where we can explicitly define goal achievement for a task. Our principle is to only maintain the goal-achieved tasks, while letting the optimizer spend more effort on improving the goal-unachieved tasks. We conduct comprehensive experiments for our Task Oriented MOO on various adversarial example generation schemes. The experimental results firmly demonstrate the merit of our proposed approach. Our code is available at https://github.com/tuananhbui89/TAMOO.
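
A simplified sketch of the task-oriented idea for one common scheme, attacking several models with a single perturbation: models that are already fooled (goal-achieved tasks) stop driving the update, so the optimizer's effort concentrates on the models that still classify correctly. The stand-in models and PGD-style update are placeholders, not the paper's aggregation scheme.

import torch
import torch.nn.functional as F

# Untrained stand-in classifiers; some tasks may be "achieved" from step 0,
# which is exactly the case the task-oriented rule handles.
models = [torch.nn.Linear(20, 5) for _ in range(3)]
x = torch.randn(1, 20)
y = torch.tensor([2])
delta = torch.zeros_like(x, requires_grad=True)
eps, alpha = 0.5, 0.05

for step in range(40):
    losses, achieved = [], []
    for m in models:
        logits = m(x + delta)
        losses.append(F.cross_entropy(logits, y))
        achieved.append(logits.argmax(dim=1).item() != y.item())   # already fooled?
    # Only goal-unachieved tasks contribute to the ascent direction; achieved
    # tasks are merely "maintained" by being left out while they stay fooled.
    active = [l for l, ok in zip(losses, achieved) if not ok]
    if not active:
        break
    total = torch.stack(active).mean()
    grad, = torch.autograd.grad(total, delta)
    with torch.no_grad():
        delta += alpha * grad.sign()
        delta.clamp_(-eps, eps)

print("fooled models:", sum(achieved))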


Understanding and Achieving Efficient Robustness with Adversarial Contrastive Learning

arXiv.org Artificial Intelligence

Contrastive learning (CL) has recently emerged as an effective approach to learning representations for a range of downstream tasks. Central to this approach is the selection of positive (similar) and negative (dissimilar) sets, which give the model the opportunity to 'contrast' between data and class representations in the latent space. In this paper, we investigate CL for improving model robustness using adversarial samples. We first designed and performed a comprehensive study to understand how adversarial vulnerability behaves in the latent space. Based on this empirical evidence, we propose an effective and efficient supervised contrastive learning approach to achieve model robustness against adversarial attacks.

Among existing defenses, adversarial training methods (e.g., FGSM and PGD adversarial training [13, 22] and TRADES [36]), which utilize adversarial examples as training data, have been among the most effective approaches, as they truly boost model robustness without facing the problem of obfuscated gradients [3]. In adversarial training, recent works [34, 4] show that reducing the divergence between the representations of images and their adversarial examples in latent space (e.g., the feature space output from an intermediate layer of a classifier) can significantly improve robustness. For example, in [4], latent representations of images in the same class are pulled closer together than those in different classes, which leads to a more compact latent space and, consequently, better robustness.
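
A compact sketch of supervised contrastive learning with adversarial samples, in the spirit described above: craft adversarial examples, then pull together the normalized features of all clean and adversarial samples that share a class label. The encoder, attack, and temperature are illustrative placeholders.

import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))
head = torch.nn.Linear(16, 3)

def fgsm(x, y, eps=0.1):
    # One-step attack through the classification head.
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(head(encoder(x_adv)), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).detach()

def sup_con(features, labels, tau=0.5):
    # Supervised contrastive loss: same-label pairs are positives.
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / tau
    mask = (labels[:, None] == labels[None, :]).float()
    mask.fill_diagonal_(0)                       # exclude self-pairs as positives
    logits = sim - 1e9 * torch.eye(len(z))       # mask out self-similarity in the softmax
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -(mask * log_prob).sum(dim=1).div(mask.sum(dim=1).clamp(min=1)).mean()

x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
x_adv = fgsm(x, y)
feats = encoder(torch.cat([x, x_adv]))
loss = sup_con(feats, torch.cat([y, y]))
loss.backward()
print(float(loss))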


Improving Adversarial Robustness by Enforcing Local and Global Compactness

arXiv.org Machine Learning

The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application. Among the many defense models developed against such attacks, adversarial training emerges as the most successful method, consistently resisting a wide range of attacks. In this work, based on an observation from a previous study that the representations of a clean data example and its adversarial examples become more divergent in higher layers of a deep neural net, we propose the Adversary Divergence Reduction Network, which enforces local/global compactness and the clustering assumption over an intermediate layer of a deep neural network. We conduct comprehensive experiments to understand the behavior of each component in isolation (i.e., local/global compactness and the clustering assumption) and compare our proposed model with state-of-the-art adversarial training methods. The experimental results demonstrate that augmenting adversarial training with our proposed components can further improve the robustness of the network, leading to higher unperturbed and adversarial predictive performance.
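
A toy illustration of the local/global compactness terms on an intermediate representation, where "local" pulls each clean example toward its own adversarial example and "global" pulls same-class adversarial representations toward each other; the network, attack, and weights are placeholders, not the paper's architecture.

import torch
import torch.nn.functional as F

backbone = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU())
classifier = torch.nn.Linear(32, 3)

def fgsm_like(x, y, eps=0.1):
    # One-step attack on the full classifier as a stand-in adversary.
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(backbone(x_adv)), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).detach()

x, y = torch.randn(64, 10), torch.randint(0, 3, (64,))
x_adv = fgsm_like(x, y)
h_clean, h_adv = backbone(x), backbone(x_adv)

local = ((h_clean - h_adv) ** 2).sum(dim=1).mean()        # clean vs. its own adversary
same = (y[:, None] == y[None, :]).float()
dist = torch.cdist(h_adv, h_adv) ** 2
global_ = (same * dist).sum() / same.sum()                # same-class pairs stay close

loss = F.cross_entropy(classifier(h_adv), y) + 0.1 * local + 0.1 * global_
loss.backward()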