
Neural Information Processing Systems

SVHN is another 32x32-resolution benchmark for color image classification. It consists of street-view images of house numbers, labelled with digits from "0" to "9".



Certified Backdoor Detection


Neural Information Processing Systems

Thus, we did not create new threats to society. Moreover, our work provides a new perspective on backdoor defense, as it is the first to address the certification of backdoor detection. This assumption holds in general in practice. In our setting, this is reflected by a small sample-wise local probability for the labeled class for most samples used for computing LDP, which easily leads to a large LDP. In the following, we show how a larger deviation of the learned decision boundary of a binary Bayesian classifier affects its LDP.



M4I: Multi-modal Models Membership Inference

Neural Information Processing Systems

ROUGE-N scores measure the overlap of n-grams [2] between the generated and reference sequences. These scores are then averaged over the whole corpus to obtain an overall quality measure. For both proposed M4I attack methods, shadow models are indispensable. The first hidden layer in the attack model has 256 units and the second hidden layer has 20 units, both activated by the ReLU function. We used a ResNet-LSTM architecture as the target model architecture.
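The corpus-level ROUGE-N computation described above can be sketched in a few lines: count n-gram overlap per sequence pair, then average over the corpus. This is a minimal recall-oriented sketch, not the paper's evaluation code; the function names are assumptions.

```python
from collections import Counter

def rouge_n(generated, reference, n=2):
    """ROUGE-N recall: fraction of reference n-grams also present
    (with clipped counts) in the generated sequence."""
    def ngrams(text, n):
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    gen, ref = ngrams(generated, n), ngrams(reference, n)
    if not ref:
        return 0.0
    overlap = sum((gen & ref).values())  # Counter intersection clips counts
    return overlap / sum(ref.values())

def corpus_rouge_n(pairs, n=2):
    """Average the per-pair scores over the whole corpus."""
    return sum(rouge_n(g, r, n) for g, r in pairs) / len(pairs)
```

Precision- and F1-style variants only change the denominator; the clipped-overlap counting in the numerator stays the same.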


Gaussian Membership Inference Privacy

Neural Information Processing Systems

We propose a novel and practical privacy notion called $f$-Membership Inference Privacy ($f$-MIP), which explicitly considers the capabilities of realistic adversaries under the membership inference attack threat model. Consequently, $f$-MIP offers interpretable privacy guarantees and improved utility (e.g., better classification accuracy). In particular, we derive a parametric family of $f$-MIP guarantees that we refer to as $\mu$-Gaussian Membership Inference Privacy ($\mu$-GMIP) by theoretically analyzing likelihood ratio-based membership inference attacks on stochastic gradient descent (SGD). Our analysis highlights that models trained with standard SGD already offer an elementary level of MIP. Additionally, we show how $f$-MIP can be amplified by adding noise to gradient updates.
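The noise amplification mentioned at the end can be illustrated with a DP-SGD-style update: clip each per-example gradient, average, and perturb the mean with Gaussian noise. This is a generic sketch under assumed parameter names, not the paper's exact mechanism or notation.

```python
import math
import random

def noisy_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, sigma=0.5, rng=None):
    """One DP-SGD-style update (illustrative; `clip`, `sigma` etc. are
    assumed names): clip each per-example gradient to L2 norm `clip`,
    average over the batch, then add Gaussian noise with standard
    deviation sigma * clip / batch_size to every coordinate."""
    rng = rng or random.Random(0)
    batch, dim = len(per_example_grads), len(params)
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    mean_grad = [sum(g[i] for g in clipped) / batch for i in range(dim)]
    noisy = [m + rng.gauss(0.0, sigma * clip / batch) for m in mean_grad]
    return [p - lr * n for p, n in zip(params, noisy)]
```

With `sigma=0` this reduces to clipped SGD, matching the abstract's point that plain SGD already provides an elementary level of membership inference privacy before any noise is added.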



AttackPilot: Autonomous Inference Attacks Against ML Services With LLM-Based Agents

Wu, Yixin, Wen, Rui, Cui, Chi, Backes, Michael, Zhang, Yang

arXiv.org Artificial Intelligence

Inference attacks have been widely studied and offer a systematic risk assessment of ML services; however, their implementation and the attack parameters for optimal estimation are challenging for non-experts. The emergence of advanced large language models presents a promising yet largely unexplored opportunity to develop autonomous agents as inference attack experts, helping address this challenge. In this paper, we propose AttackPilot, an autonomous agent capable of independently conducting inference attacks without human intervention. We evaluate it on 20 target services. The evaluation shows that our agent, using GPT-4o, achieves a 100.0% task completion rate and near-expert attack performance, with an average token cost of only $0.627 per run. The agent can also be powered by many other representative LLMs and can adaptively optimize its strategy under service constraints. We further perform trace analysis, demonstrating that design choices, such as a multi-agent framework and task-specific action spaces, effectively mitigate errors such as bad plans, inability to follow instructions, task context loss, and hallucinations. We anticipate that such agents could empower non-expert ML service providers, auditors, or regulators to systematically assess the risks of ML services without requiring deep domain expertise.
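The abstract credits a task-specific action space with mitigating bad plans and hallucinated tool calls. The paper's actual design is not reproduced here; the following is a generic sketch of the idea, with every action name hypothetical: the agent may only emit actions from a fixed whitelist, and anything outside it is rejected rather than executed.

```python
# All action names below are illustrative placeholders, not AttackPilot's API.
def run_agent(plan, actions):
    """Execute a plan step by step; actions outside the fixed action
    space are rejected instead of being guessed at."""
    log = []
    for name, args in plan:
        if name not in actions:
            log.append((name, "rejected: not in action space"))
            continue
        log.append((name, actions[name](**args)))
    return log

actions = {
    "train_shadow_model": lambda dataset: f"shadow model trained on {dataset}",
    "run_attack": lambda target: f"membership inference against {target}",
}
```

Constraining the output space this way turns a hallucinated tool call into a recoverable error the agent framework can surface, rather than an arbitrary execution.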