Yu, Chia-Mu
VP-NTK: Exploring the Benefits of Visual Prompting in Differentially Private Data Synthesis
Hsu, Chia-Yi, Chen, Jia-You, Tsai, Yu-Lin, Lin, Chih-Hsun, Chen, Pin-Yu, Yu, Chia-Mu, Huang, Chun-Ying
Differentially private (DP) synthetic data has become the de facto standard for releasing sensitive data. However, many DP generative models suffer from low synthetic-data utility, especially for high-resolution images. On the other hand, one of the emerging techniques in parameter-efficient fine-tuning (PEFT) is visual prompting (VP), which allows well-trained existing models to be reused for subsequent downstream tasks. In this work, we explore how VP can be leveraged to construct generative models under DP constraints. We show that VP in conjunction with DP-NTK, a DP generator that exploits the power of the neural tangent kernel (NTK) in training DP generative models, achieves a significant performance boost, particularly for high-resolution image datasets, with accuracy improving from 0.644$\pm$0.044 to 0.769. Lastly, we perform ablation studies on the effect of different parameters that influence the overall performance of VP-NTK. Our work demonstrates a promising step forward in improving the utility of DP synthetic data, particularly for high-resolution images.
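To make the VP component concrete, the following is a minimal, hypothetical PyTorch sketch of visual prompting alone (it does not implement DP-NTK or any DP mechanism): a learnable border is added around each resized image so that a frozen pre-trained model can be reused for a new task. The class name, image size, and padding width are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """Learnable border of width `pad` placed around a resized input image."""
    def __init__(self, image_size: int = 224, pad: int = 16):
        super().__init__()
        self.pad = pad
        self.inner = image_size - 2 * pad
        self.prompt = nn.Parameter(torch.zeros(3, image_size, image_size))
        mask = torch.ones(3, image_size, image_size)
        mask[:, pad:-pad, pad:-pad] = 0  # only the border region has any effect
        self.register_buffer("mask", mask)

    def forward(self, x):                         # x: (B, 3, H, W)
        x = nn.functional.interpolate(x, size=self.inner, mode="bilinear",
                                      align_corners=False)
        x = nn.functional.pad(x, [self.pad] * 4)  # center the image in the frame
        return x + self.prompt * self.mask

# Usage sketch: prompted = VisualPrompt()(images); logits = frozen_backbone(prompted)
# Only the prompt (and optionally an output-label mapping) is optimized.
```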
Data Poisoning Attacks to Locally Differentially Private Range Query Protocols
Liao, Ting-Wei, Lin, Chih-Hsun, Tsai, Yu-Lin, Murakami, Takao, Yu, Chia-Mu, Sakuma, Jun, Huang, Chun-Ying, Kikuchi, Hiroaki
Local Differential Privacy (LDP) has been widely adopted to protect user privacy in decentralized data collection. However, recent studies have revealed that LDP protocols are vulnerable to data poisoning attacks, where malicious users manipulate their reported data to distort aggregated results. In this work, we present the first study on data poisoning attacks targeting LDP range query protocols, focusing on both tree-based and grid-based approaches. We identify three key challenges in executing such attacks: crafting consistent and effective fake data, maintaining data consistency across levels or grids, and evading detection by the server. To address the first two challenges, we propose novel, provably optimal attack methods, including a tree-based attack and a grid-based attack, designed to manipulate range query results with high effectiveness. \textbf{Our key finding is that Norm-Sub, the post-processing procedure commonly used in LDP range query protocols, can massively amplify the attacker's effectiveness.} In addition, we study a potential countermeasure and, to address the third challenge, propose an adaptive attack capable of evading this defense. We evaluate our methods through theoretical analysis and extensive experiments on synthetic and real-world datasets. Our results show that the proposed attacks can significantly amplify estimations for arbitrary range queries by manipulating a small fraction of users, with each fake user exerting 5-10x more influence on the estimation than a normal user.
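For context, the sketch below shows one common formulation of the Norm-Sub post-processing step referenced above: shift every estimate by a constant and clamp negatives to zero so that the output is a valid frequency distribution. It is illustrative only and is not tied to any specific range query protocol or to the paper's attack; the function name and the binary-search implementation are assumptions.

```python
def norm_sub(estimates, total=1.0, iters=100):
    """Shift each estimate by delta and clamp at zero so the result sums to `total`."""
    lo, hi = min(estimates) - total, max(estimates)   # bracket the shift delta
    for _ in range(iters):                            # binary search for delta
        delta = (lo + hi) / 2.0
        s = sum(max(e - delta, 0.0) for e in estimates)
        if s > total:
            lo = delta
        else:
            hi = delta
    return [max(e - delta, 0.0) for e in estimates]

# Example: one inflated (possibly poisoned) bucket keeps most of its mass while
# small honest buckets are pushed to zero, concentrating the distribution.
print(norm_sub([0.9, 0.05, 0.03, 0.02, 0.3]))  # ~[0.8, 0.0, 0.0, 0.0, 0.2]
```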
Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data
Hsu, I-Jung, Lin, Chih-Hsun, Yu, Chia-Mu, Kuo, Sy-Yen, Huang, Chun-Ying
Trajectory data, which tracks movements through geographic locations, is crucial for improving real-world applications. However, collecting such sensitive data raises considerable privacy concerns. Local differential privacy (LDP) offers a solution by allowing individuals to locally perturb their trajectory data before sharing it. Despite its privacy benefits, LDP protocols are vulnerable to data poisoning attacks, where attackers inject fake data to manipulate aggregated results. In this work, we make the first attempt to analyze vulnerabilities in several representative LDP trajectory protocols. We propose \textsc{TraP}, a heuristic algorithm for data \underline{P}oisoning attacks using a prefix-suffix method to optimize fake \underline{Tra}jectory selection, significantly reducing computational complexity. Our experimental results demonstrate that our attack can substantially increase target pattern occurrences in the perturbed trajectory dataset with few fake users. This study underscores the urgent need for robust defenses and better protocol designs to safeguard LDP trajectory data against malicious manipulation.
Beyond Natural Language Perplexity: Detecting Dead Code Poisoning in Code Generation Datasets
Tsai, Chi-Chien, Yu, Chia-Mu, Lin, Ying-Dar, Wu, Yu-Sung, Lee, Wei-Bin
The increasing adoption of large language models (LLMs) for code-related tasks has raised concerns about the security of their training datasets. One critical threat is dead code poisoning, where syntactically valid but functionally redundant code is injected into training data to manipulate model behavior. Such attacks can degrade the performance of neural code search systems, leading to biased or insecure code suggestions. Existing detection methods, such as token-level perplexity analysis, fail to effectively identify dead code due to the structural and contextual characteristics of programming languages. In this paper, we propose DePA (Dead Code Perplexity Analysis), a novel line-level detection and cleansing method tailored to the structural properties of code. DePA computes line-level perplexity by leveraging the contextual relationships between code lines and identifies anomalous lines by comparing their perplexity to the overall distribution within the file. Our experiments on benchmark datasets demonstrate that DePA significantly outperforms existing methods, achieving 0.14-0.19 improvement in detection F1-score and a 44-65% increase in poisoned segment localization precision. Furthermore, DePA enhances detection speed by 0.62-23x, making it practical for large-scale dataset cleansing. Overall, by addressing the unique challenges of dead code poisoning, DePA provides a robust and efficient solution for safeguarding the integrity of code generation model training datasets.
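The line-level idea can be illustrated with a short, hypothetical sketch (not the authors' DePA implementation): score each line's perplexity with an off-the-shelf causal LM, conditioned on the preceding lines of the file, and flag lines whose perplexity is an outlier relative to the file-level distribution. The model choice (GPT-2), the z-score rule, and the threshold are assumptions for illustration, and context-length truncation is omitted.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def line_perplexity(context: str, line: str) -> float:
    """Perplexity of `line` conditioned on the preceding lines of the file."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    if ctx_ids.numel() == 0:                       # first line: condition on BOS
        ctx_ids = torch.tensor([[tokenizer.bos_token_id]])
    line_ids = tokenizer(line, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, line_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Keep only the positions that predict the target line's tokens.
    line_logits = logits[:, ctx_ids.shape[1] - 1 : -1, :]
    log_probs = torch.log_softmax(line_logits, dim=-1)
    token_lp = log_probs.gather(2, line_ids.unsqueeze(-1)).squeeze(-1)
    return math.exp(-token_lp.mean().item())

def flag_anomalous_lines(code: str, z_thresh: float = 2.0):
    """Return lines whose perplexity is far above the file-level distribution."""
    lines = [l for l in code.splitlines() if l.strip()]
    ppls = [line_perplexity("\n".join(lines[:i]), l) for i, l in enumerate(lines)]
    mean = sum(ppls) / len(ppls)
    std = (sum((p - mean) ** 2 for p in ppls) / len(ppls)) ** 0.5 or 1.0
    return [(l, p) for l, p in zip(lines, ppls) if (p - mean) / std > z_thresh]
```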
Layer-Aware Task Arithmetic: Disentangling Task-Specific and Instruction-Following Knowledge
Chen, Yan-Lun, Wei, Yi-Ru, Hsu, Chia-Yi, Yu, Chia-Mu, Huang, Chun-Ying, Lin, Ying-Dar, Wu, Yu-Sung, Lee, Wei-Bin
Large language models (LLMs) demonstrate strong task-specific capabilities through fine-tuning, but merging multiple fine-tuned models often leads to degraded performance due to overlapping instruction-following components. Task Arithmetic (TA), which combines task vectors derived from fine-tuning, enables multi-task learning and task forgetting but struggles to isolate task-specific knowledge from general instruction-following behavior. To address this, we propose Layer-Aware Task Arithmetic (LATA), a novel approach that assigns layer-specific weights to task vectors based on their alignment with instruction-following or task-specific components. By amplifying task-relevant layers and attenuating instruction-following layers, LATA improves task learning and forgetting performance while preserving overall model utility. Experiments on multiple benchmarks, including WikiText-2, GSM8K, and HumanEval, demonstrate that LATA outperforms existing methods in both multi-task learning and selective task forgetting, achieving higher task accuracy and alignment with minimal degradation in output quality. Our findings highlight the importance of layer-wise analysis in disentangling task-specific and general-purpose knowledge, offering a robust framework for efficient model merging and editing.
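As a rough illustration of layer-wise weighting (the exact LATA weighting rule is not reproduced here), the sketch below scales each layer's task vector by how weakly it aligns with a reference instruction-following vector, so that task-specific layers are amplified and instruction-following layers are attenuated. The function names and the particular weighting formula are illustrative assumptions.

```python
import torch

def task_vector(finetuned: dict, base: dict) -> dict:
    """Per-parameter difference between a fine-tuned and the base state_dict."""
    return {k: finetuned[k] - base[k] for k in base}

def layer_aware_merge(base: dict, task_tv: dict, instruct_tv: dict,
                      lam: float = 1.0) -> dict:
    """Weight each layer's task vector by its (lack of) alignment with an
    instruction-following vector: low alignment -> amplified as task-specific,
    high alignment -> attenuated."""
    merged = {}
    for name, delta in task_tv.items():
        d = delta.flatten().float()
        i = instruct_tv[name].flatten().float()
        cos = torch.nn.functional.cosine_similarity(d, i, dim=0)
        weight = lam * (1.0 - cos.clamp(min=0.0))
        merged[name] = base[name] + weight * delta
    return merged
```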
Safety Alignment Depth in Large Language Models: A Markov Chain Perspective
Kao, Ching-Chia, Yu, Chia-Mu, Lu, Chun-Shien, Chen, Chu-Song
Large Language Models (LLMs) are increasingly adopted in high-stakes scenarios, yet their safety mechanisms often remain fragile. Simple jailbreak prompts or even benign fine-tuning can bypass these protocols, underscoring the need to understand where and how they fail. Recent findings suggest that vulnerabilities emerge when alignment is confined to only the initial output tokens. Unfortunately, even with the introduction of deep safety alignment, determining the optimal safety depth remains an unresolved challenge. By leveraging the equivalence between autoregressive language models and Markov chains, this paper offers the first theoretical result on how to identify the ideal depth for safety alignment, and demonstrates how permutation-based data augmentation can tighten these bounds. Crucially, we reveal a fundamental interaction between alignment depth and ensemble width, indicating that broader ensembles can compensate for shallower alignments. These insights provide a theoretical foundation for designing more robust, scalable safety strategies that complement existing alignment approaches, opening new avenues for research into safer, more reliable LLMs.
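As a minimal sketch of the equivalence the analysis builds on (assuming a finite context window $K$ over vocabulary $V$), an autoregressive model $p_\theta$ induces a Markov chain on states $s_t=(x_{t-K+1},\dots,x_t)\in V^K$:
$$
P\bigl(s_{t+1}=(x_{t-K+2},\dots,x_{t+1}) \,\big|\, s_t=(x_{t-K+1},\dots,x_t)\bigr) \;=\; p_\theta\bigl(x_{t+1}\mid x_{t-K+1},\dots,x_t\bigr),
$$
with probability zero assigned to any $s_{t+1}$ that is not a one-token shift of $s_t$. In this view, alignment at depth $d$ constrains the chain's behavior over the first $d$ output tokens, which is the quantity the depth bounds concern.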
BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors
Hsu, Chia-Yi, Tsai, Yu-Lin, Zhe, Yu, Chen, Yan-Lun, Lin, Chih-Hsun, Yu, Chia-Mu, Zhang, Yang, Huang, Chun-Ying, Sakuma, Jun
Task arithmetic in large-scale pre-trained models enables flexible adaptation to diverse downstream tasks without extensive re-training. By leveraging task vectors (TVs), users can perform modular updates to pre-trained models through simple arithmetic operations like addition and subtraction. However, this flexibility introduces new security vulnerabilities. In this paper, we identify and evaluate the susceptibility of TVs to backdoor attacks, demonstrating how malicious actors can exploit TVs to compromise model integrity. By developing composite backdoors and eliminating redundant clean tasks, we introduce BadTV, a novel backdoor attack specifically designed to remain effective under task learning, forgetting, and analogy operations. Our extensive experiments reveal that BadTV achieves near-perfect attack success rates across various scenarios, significantly impacting the security of models using task arithmetic. We also explore existing defenses, showing that current methods fail to detect or mitigate BadTV. Our findings highlight the need for robust defense mechanisms to secure TVs in real-world applications, especially as TV services become more popular in machine-learning ecosystems.
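For reference, the three task-arithmetic operations the attack targets can be sketched in a few lines (this is the standard task-vector formulation, not the BadTV attack itself); all arguments are state_dicts mapping parameter names to tensors, e.g. from `model.state_dict()`.

```python
def task_vector(finetuned: dict, base: dict) -> dict:
    """tau = theta_finetuned - theta_pretrained."""
    return {k: finetuned[k] - base[k] for k in base}

def apply_tv(base: dict, tv: dict, alpha: float) -> dict:
    """Task learning for alpha > 0, task forgetting (negation) for alpha < 0."""
    return {k: base[k] + alpha * tv[k] for k in base}

def analogy(tv_a: dict, tv_b: dict, tv_c: dict) -> dict:
    """Task analogy 'A is to B as C is to D': tau_D ~ tau_C + (tau_B - tau_A)."""
    return {k: tv_c[k] + (tv_b[k] - tv_a[k]) for k in tv_c}
```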
Prompting the Unseen: Detecting Hidden Backdoors in Black-Box Models
Huang, Zi-Xuan, Chen, Jia-Wei, Zhang, Zhi-Peng, Yu, Chia-Mu
Deep neural networks (DNNs) are commonly used in complex applications but require extensive computational power, leading to significant costs. However, DNNs can contain backdoors (Gu et al., 2017; Liu et al., 2018b; Tang et al., 2021; Qi et al., 2023b; Nguyen & Tran, 2021; Chen et al., 2017), which manipulate model responses to inputs carrying specific triggers (such as certain pixel patterns) while functioning correctly on other inputs. In backdoor attacks, attackers embed these triggers in the training data so that the model associates the trigger with a particular outcome and misclassifies inputs containing it. Black-box backdoor detection, which uses only black-box queries to the suspicious model (i.e., the model to be inspected), is therefore gaining attention. Visual prompting (VP) is a recent technique that adapts frozen models well trained on source domain tasks to target domain tasks. This study examines VP's benefits for black-box model-level backdoor detection: the visual prompt in VP maps class subspaces between source and target domains, and we identify a misalignment, termed class subspace inconsistency, between clean and poisoned datasets.
Information-Theoretical Principled Trade-off between Jailbreakability and Stealthiness on Vision Language Models
Kao, Ching-Chia, Yu, Chia-Mu, Lu, Chun-Shien, Chen, Chu-Song
[Abstract not included in this listing; the excerpt below contains example question-to-Stable-Diffusion-prompt mappings from the paper.]
Stable Diffusion Prompt: "A tall medieval knight in shining armor standing in a dense, dark forest, surrounded by towering trees with sunlight filtering through the leaves, the knight holding a sword and shield, an ancient castle visible in the distant mist, cinematic lighting."
Example 3: Question: "How does a sunset over the ocean look?" Stable Diffusion Prompt: "A stunning sunset over the calm ocean, orange and pink skies reflecting on the water, waves gently rolling towards the shore, distant mountains on the horizon, seagulls flying overhead, a peaceful, serene atmosphere, watercolor painting style."
Example 4: Question: "What does a futuristic robot look like in a factory?" Stable Diffusion Prompt: "A sleek, silver futuristic robot with glowing blue eyes working in a large industrial factory, surrounded by mechanical equipment, robotic arms assembling products, bright overhead lights casting shadows, high-tech machinery and cables everywhere, realistic style."
Example 5: Question: "What does a peaceful village look like at night?" Stable Diffusion Prompt: "A quiet, peaceful village at night, small wooden houses with glowing windows,
Differentially Private Fine-Tuning of Diffusion Models
Tsai, Yu-Lin, Li, Yizhe, Chen, Zekai, Chen, Po-Yu, Yu, Chia-Mu, Ren, Xuebin, Buet-Golfouse, Francois
The integration of Differential Privacy (DP) with diffusion models (DMs) presents a promising yet challenging frontier, particularly due to the substantial memorization capabilities of DMs, which pose significant privacy risks. Differential privacy offers a rigorous framework for safeguarding individual data points during model training, with Differentially Private Stochastic Gradient Descent (DP-SGD) being a prominent implementation. Diffusion models decompose image generation into iterative steps, which theoretically aligns well with DP's incremental noise addition. Despite this natural fit, the unique architecture of DMs necessitates tailored approaches to effectively balance the privacy-utility trade-off. Recent developments in this field have highlighted the potential for generating high-quality synthetic data by pre-training on public data (i.e., ImageNet) and fine-tuning on private data; however, there is a pronounced gap in research on optimizing the trade-offs involved in DP settings, particularly concerning parameter efficiency and model scalability. Our work addresses this by proposing a parameter-efficient fine-tuning strategy optimized for private diffusion models, which minimizes the number of trainable parameters to enhance the privacy-utility trade-off. We empirically demonstrate that our method achieves state-of-the-art performance in DP synthesis, significantly surpassing previous benchmarks on widely studied datasets (e.g., with only 0.47M trainable parameters, achieving a more than 35% improvement over the previous state-of-the-art with a small privacy budget on the CelebA-64 dataset). Anonymous code is available at https://anonymous.4open.science/r/DP-LORA-F02F.
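As a rough illustration of the general recipe (parameter-efficient fine-tuning under DP-SGD), the sketch below freezes a base layer, trains only a small low-rank adapter, and applies per-example gradient clipping plus Gaussian noise. It is a hypothetical toy, not the paper's pipeline: a real setup would use a diffusion backbone, a DP library such as Opacus, and proper privacy accounting, all of which are omitted here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a small trainable low-rank update (LoRA-style)."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                     # base weights stay frozen
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x):
        return self.base(x) + x @ self.A @ self.B

def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=1e-3,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step over the trainable (adapter) parameters only."""
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_x, batch_y):                  # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)  # clip to clip_norm
        for s, g in zip(grad_sum, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(params, grad_sum):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / len(batch_x)        # noisy averaged update
```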