GLEAN: Generative Learning for Eliminating Adversarial Noise
Kim, Justin Lyu, Woo, Kyoungwan
In the age of powerful diffusion models such as DALL-E and Stable Diffusion, many in the digital art community have suffered style mimicry attacks carried out by fine-tuning these models on their works. The ability to mimic an artist's style via text-to-image diffusion models raises serious ethical issues, especially without explicit consent. Glaze, a tool that applies various ranges of perturbations to digital art, has shown significant success in preventing style mimicry attacks, at the cost of artifacts ranging from imperceptible noise to severe quality degradation. The release of Glaze has sparked further discussions regarding the effectiveness of similar protection methods. In this paper, we propose GLEAN: applying image-to-image (I2I) generative networks to strip perturbations from Glazed images, and we evaluate the performance of style mimicry attacks on the outputs of Glaze before and after GLEAN is applied. GLEAN aims to support and enhance Glaze by highlighting its limitations and encouraging further development.
- North America > United States > Massachusetts (0.04)
- North America > United States > California (0.04)
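The GLEAN abstract above leaves the I2I architecture and training setup unspecified. As a loose illustration of the idea only, the following PyTorch sketch trains a small encoder-decoder to map perturbed images back to clean originals with a reconstruction loss; the model, loss, and hyperparameters are assumptions, not GLEAN's actual design.

```python
# Hypothetical sketch of an image-to-image "purifier" trained to remove
# adversarial perturbations (e.g. Glaze-style cloaks). Not the actual GLEAN model.
import torch
import torch.nn as nn

class Purifier(nn.Module):
    """Small encoder-decoder that maps a perturbed image to a clean estimate."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, perturbed, clean):
    """One reconstruction step: push the model's output toward the clean image."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(perturbed), clean)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = Purifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Stand-in data: random tensors in place of (perturbed, original) image pairs.
    perturbed = torch.rand(8, 3, 64, 64)
    clean = torch.rand(8, 3, 64, 64)
    print(train_step(model, opt, perturbed, clean))
```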
Model Hijacking Attack in Federated Learning
Li, Zheng, Wu, Siyuan, Chen, Ruichuan, Aditya, Paarijaat, Akkus, Istemi Ekin, Vanga, Manohar, Zhang, Min, Li, Hao, Zhang, Yang
Machine learning (ML), driven by prominent paradigms such as centralized and federated learning, has made significant progress in various critical applications ranging from autonomous driving to face recognition. However, its remarkable success has been accompanied by various attacks. Recently, the model hijacking attack has shown that ML models can be hijacked to execute tasks different from their original tasks, which poses both accountability and parasitic computing risks. Nevertheless, thus far, this attack has only focused on centralized learning. In this work, we broaden the scope of this attack to the federated learning domain, where multiple clients collaboratively train a global model without sharing their data. Specifically, we present HijackFL, the first-of-its-kind hijacking attack against the global model in federated learning. The adversary aims to force the global model to perform a task (called the hijacking task) different from its original task, without the server or benign clients noticing. To accomplish this, unlike existing methods that use data poisoning to modify the target model's parameters, HijackFL searches for pixel-level perturbations (cloaks), computed on the adversary's local model without modifying it, that align hijacking samples with original-task samples in the feature space. When performing the hijacking task, the adversary applies these cloaks to the hijacking samples, compelling the global model to identify them as original samples and predict them accordingly. We conduct extensive experiments on four benchmark datasets and three popular models. Empirical results demonstrate that HijackFL outperforms baseline attacks. We further investigate the factors that affect its performance and discuss possible defenses to mitigate its impact.
- North America > Canada > Ontario > Toronto (0.04)
- Asia (0.04)
- Law Enforcement & Public Safety > Terrorism (1.00)
- Information Technology (1.00)
- Transportation > Ground > Road (0.34)
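HijackFL, as summarized above, optimizes pixel-level cloaks so that hijacking-task samples land near original-task samples in the local model's feature space. The sketch below illustrates only that general feature-matching idea, with a toy feature extractor and an assumed L-infinity budget; the real attack's objective, constraints, and models are not given in the abstract.

```python
# Hypothetical sketch: optimize a pixel-level perturbation ("cloak") so that a
# hijacking-task image is mapped close to an original-task image in feature space.
import torch
import torch.nn as nn

# Toy feature extractor standing in for the adversary's local copy of the model.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
feature_extractor.eval()

hijack_img = torch.rand(1, 3, 32, 32)    # sample from the hijacking task
anchor_img = torch.rand(1, 3, 32, 32)    # sample from the original task
with torch.no_grad():
    target_feat = feature_extractor(anchor_img)

cloak = torch.zeros_like(hijack_img, requires_grad=True)
opt = torch.optim.Adam([cloak], lr=0.01)
eps = 8 / 255  # assumed L-infinity budget to keep the cloak small

for step in range(200):
    opt.zero_grad()
    feat = feature_extractor((hijack_img + cloak).clamp(0, 1))
    loss = nn.functional.mse_loss(feat, target_feat)  # feature-space alignment
    loss.backward()
    opt.step()
    with torch.no_grad():
        cloak.clamp_(-eps, eps)   # project back into the allowed budget

print("final feature-matching loss:", loss.item())
```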
Ulixes: Facial Recognition Privacy with Adversarial Machine Learning
Cilloni, Thomas, Wang, Wei, Walter, Charles, Fleming, Charles
Facial recognition tools are becoming exceptionally accurate in identifying people from images. However, this comes at the cost of privacy for users of online services with photo management (e.g. social media platforms). Particularly troubling is the ability to leverage unsupervised learning to recognize faces even when the user has not labeled their images. In this paper we propose Ulixes, a strategy to generate visually non-invasive facial noise masks that yield adversarial examples, preventing the formation of identifiable user clusters in the embedding space of facial encoders. This is applicable even when a user is unmasked and labeled images are available online. We demonstrate the effectiveness of Ulixes by showing that various classification and clustering methods cannot reliably label the adversarial examples we generate. We also study the effects of Ulixes in various black-box settings and compare it to the current state of the art in adversarial machine learning. Finally, we challenge the effectiveness of Ulixes against adversarially trained models and show that it is robust to countermeasures.
- North America > United States > Mississippi (0.04)
- North America > United States > Ohio > Franklin County > Columbus (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (8 more...)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Law (0.93)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
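Ulixes, per the abstract above, crafts visually non-invasive noise masks that prevent a user's photos from forming identifiable clusters in a face encoder's embedding space. A minimal sketch of one plausible reading of that goal follows: push a protected photo's embedding away from the user's centroid under a small pixel budget. The encoder, objective, and budget here are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: learn a small additive noise mask that pushes a face's
# embedding away from the user's usual cluster in an encoder's embedding space.
import torch
import torch.nn as nn

encoder = nn.Sequential(                     # toy stand-in for a face encoder
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64),
)
encoder.eval()

user_photos = torch.rand(4, 3, 64, 64)       # stand-in for the user's existing images
with torch.no_grad():
    centroid = encoder(user_photos).mean(dim=0, keepdim=True)

photo = torch.rand(1, 3, 64, 64)             # new photo to protect before upload
mask = torch.zeros_like(photo, requires_grad=True)
opt = torch.optim.Adam([mask], lr=0.01)
eps = 4 / 255                                # assumed visual budget for the mask

for _ in range(200):
    opt.zero_grad()
    emb = encoder((photo + mask).clamp(0, 1))
    # Maximize distance to the user's centroid, i.e. minimize its negative.
    loss = -(emb - centroid).norm(dim=1).mean()
    loss.backward()
    opt.step()
    with torch.no_grad():
        mask.clamp_(-eps, eps)

print("distance from user's cluster centroid:", -loss.item())
```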
Enabling Inference Privacy with Adaptive Noise Injection
Kariyappa, Sanjay, Dia, Ousmane, Qureshi, Moinuddin K
User-facing software services are becoming increasingly reliant on remote servers to host Deep Neural Network (DNN) models, which perform inference tasks for the clients. Such services require the client to send input data to the service provider, who processes it using a DNN and returns the output predictions to the client. Due to the rich nature of inputs such as images and speech, the input often contains more information than is necessary to perform the primary inference task. Consequently, in addition to the primary inference task, a malicious service provider could infer secondary (sensitive) attributes from the input, compromising the client's privacy. The goal of our work is to improve inference privacy by injecting noise into the input to hide the irrelevant features that are not conducive to the primary classification task. To this end, we propose Adaptive Noise Injection (ANI), which uses a lightweight DNN on the client side to inject noise into each input before transmitting it to the service provider to perform inference. Our key insight is that by customizing the noise to each input, we can achieve a state-of-the-art trade-off between utility and privacy (up to 48.5% degradation in sensitive-task accuracy with 1% degradation in primary accuracy), significantly outperforming existing noise injection schemes. Our method does not require prior knowledge of the sensitive attributes and incurs minimal computational overheads.
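The ANI abstract describes a lightweight client-side DNN that injects input-specific noise before the input is sent to the server. The sketch below is a rough, assumed rendering of that setup: a small noise generator trained to preserve a primary classifier's loss while degrading a sensitive-attribute classifier's. The networks, loss weighting, and noise budget are all hypothetical stand-ins, not the paper's actual formulation.

```python
# Hypothetical sketch of per-input noise injection: a small client-side network
# produces a noise map for each input; training balances keeping the primary
# task accurate against confusing a sensitive-attribute classifier.
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Lightweight client-side model that outputs input-specific noise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return x + 0.1 * self.net(x)   # 0.1 is an assumed noise budget

# Toy stand-ins for the server's primary classifier and a sensitive-attribute
# classifier (in a real setup the latter would itself be trained adversarially).
def make_classifier(num_classes):
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(8, num_classes))

primary, sensitive = make_classifier(10), make_classifier(2)
gen = NoiseGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.rand(16, 3, 32, 32)
y_primary = torch.randint(0, 10, (16,))
y_sensitive = torch.randint(0, 2, (16,))

for _ in range(10):
    opt.zero_grad()
    noisy = gen(x)
    # Keep the primary task accurate while penalizing sensitive-attribute leakage.
    loss = ce(primary(noisy), y_primary) - 0.5 * ce(sensitive(noisy), y_sensitive)
    loss.backward()
    opt.step()

print("done; the noised input would be what the client transmits to the server")
```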
Can This AI Filter Protect Identities From Facial Recognition System?
Facial recognition technology (FRT) has long been a matter of grave concern, so much so that major tech giants like Microsoft, Amazon, IBM and Google earlier this year stopped selling their FRT to police authorities. Additionally, Clearview AI's facial recognition app, which scraped billions of images of people without consent, made matters even worse for the public. In fact, the whole concept of companies using people's social media images without permission to train their FRT algorithms can prove troublesome for the general public's identity and personal privacy. To protect people's identities from companies that might misuse them, researchers from the computer science department of the University of Chicago proposed an AI system to fool these facial recognition systems. Named Fawkes, after Guy Fawkes (the figure behind Guy Fawkes Night), the system is designed to help users safeguard their images and selfies with a filter against unwanted facial recognition models. This filter, which the researchers call a "cloak," adds invisible pixel-level changes to photos that cannot be seen by the human eye but can deceive these FRTs.
Adversarial T-shirt VS AI
Becoming invisible to cameras is difficult, and for now at least, you're going to look really funny to other humans if you try it. An absence of data, though, isn't the only way to foil a system. Instead, what if you make a point of being seen, and in doing so generate enough noise in the system that a single signal becomes harder to find? When you can't see under the hood of a system, it's harder to figure out how to foil it. Making something like this work is both an art and a science, and cracking the code requires a healthy degree of trial and error.
- Textiles, Apparel & Luxury Goods (0.45)
- Media > News (0.40)
A Principled Approach to Learning Stochastic Representations for Privacy in Deep Neural Inference
Mireshghallah, Fatemehsadat, Taram, Mohammadkazem, Jalali, Ali, Elthakeb, Ahmed Taha, Tullsen, Dean, Esmaeilzadeh, Hadi
INFerence-as-a-Service (INFaaS) in the cloud has enabled the prevalent use of Deep Neural Networks (DNNs) in home automation, targeted advertising, machine vision, etc. The cloud receives the inference request as a raw input containing a rich set of private information that can be misused or leaked, possibly inadvertently. This prevalent setting can compromise the privacy of users during the inference phase. This paper sets out to provide a principled approach, dubbed Cloak, that finds optimal stochastic perturbations to obfuscate the private data before it is sent to the cloud. To this end, Cloak reduces the information content of the transmitted data while conserving the essential pieces that enable the request to be serviced accurately. The key idea is formulating the discovery of this stochasticity as an offline gradient-based optimization problem that reformulates a pre-trained DNN (with optimized, known weights) as an analytical function of the stochastic perturbations. Using the Laplace distribution as a parametric model for the stochastic perturbations, Cloak learns the optimal parameters using gradient descent and Monte Carlo sampling. This set of optimized Laplace distributions further guarantees that the injected stochasticity satisfies the ε-differential privacy criterion. Experimental evaluations with real-world datasets show that, on average, the injected stochasticity can reduce the information content in the input data by 80.07%, while incurring a 7.12% accuracy loss.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > Santa Clara County > Santa Clara (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.66)
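Cloak, as described above, learns Laplace-distributed perturbations by gradient descent and Monte Carlo sampling against a frozen pre-trained DNN. The following sketch shows the general mechanics of that idea under stated assumptions (a toy frozen classifier and an assumed trade-off objective); it does not reproduce the paper's actual formulation or its differential-privacy accounting.

```python
# Hypothetical sketch of learning per-feature Laplace noise scales: sample noise
# with the reparameterization trick, pass the noised input through a frozen
# classifier, and trade off task loss against the amount of injected noise.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # frozen stand-in
for p in classifier.parameters():
    p.requires_grad_(False)

log_scale = torch.zeros(3, 32, 32, requires_grad=True)  # learnable Laplace scales (log-space)
opt = torch.optim.Adam([log_scale], lr=0.01)
ce = nn.CrossEntropyLoss()

x = torch.rand(16, 3, 32, 32)
y = torch.randint(0, 10, (16,))

for _ in range(50):
    opt.zero_grad()
    scale = log_scale.exp()
    # Monte Carlo sample of Laplace noise, differentiable w.r.t. the scales.
    noise = torch.distributions.Laplace(torch.zeros_like(scale), scale).rsample((16,))
    # Assumed objective: keep task loss low while rewarding larger noise scales.
    loss = ce(classifier(x + noise), y) - 0.1 * scale.mean()
    loss.backward()
    opt.step()

print("learned average noise scale:", log_scale.exp().mean().item())
```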
Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models
Shan, Shawn, Wenger, Emily, Zhang, Jiayun, Li, Huiying, Zheng, Haitao, Zhao, Ben Y.
Today's proliferation of powerful facial recognition models poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvas the Internet for data and train highly accurate facial recognition models of us without our knowledge. We need tools to protect ourselves from unauthorized facial recognition systems and their numerous potential misuses. Unfortunately, work in related areas is limited in practicality and effectiveness. In this paper, we propose Fawkes, a system that allows individuals to inoculate themselves against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes (we call them "cloaks") to their own photos before publishing them online. When collected by a third-party "tracker" and used to train facial recognition models, these "cloaked" images produce functional models that consistently misidentify the user. We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are "leaked" to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate. In fact, we perform real experiments against today's state-of-the-art facial recognition services and achieve 100% success. Finally, we show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt cloaks.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > China (0.04)
- North America > United States > Oregon (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
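Fawkes' cloaks, per the abstract above, are small pixel-level changes that shift a photo's feature-space representation so that models trained on the cloaked images misidentify the user. The sketch below illustrates the broad idea of feature-space cloaking only: it nudges a photo toward a different identity's features under a small pixel budget, with a toy extractor and assumed parameters rather than Fawkes' actual optimization.

```python
# Hypothetical sketch of image "cloaking": nudge a photo, within a small pixel
# budget, so a feature extractor maps it close to a different person's features.
import torch
import torch.nn as nn

extractor = nn.Sequential(                   # toy stand-in for a face feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 128),
)
extractor.eval()

user_photo = torch.rand(1, 3, 64, 64)        # photo the user wants to protect
target_photo = torch.rand(1, 3, 64, 64)      # photo of a different (target) identity
with torch.no_grad():
    target_feat = extractor(target_photo)

cloak = torch.zeros_like(user_photo, requires_grad=True)
opt = torch.optim.Adam([cloak], lr=0.01)
budget = 8 / 255                             # assumed imperceptibility budget

for _ in range(300):
    opt.zero_grad()
    feat = extractor((user_photo + cloak).clamp(0, 1))
    loss = nn.functional.mse_loss(feat, target_feat)  # move toward the target identity
    loss.backward()
    opt.step()
    with torch.no_grad():
        cloak.clamp_(-budget, budget)

cloaked_photo = (user_photo + cloak).clamp(0, 1).detach()  # image the user would publish
print("final feature distance to target identity:", loss.item())
```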
Video reveals how patent-pending stealth material can hide objects by bending light
Invisibility cloak technology has been developed that bends light in order to make objects disappear. The material, created by Canada-based camouflage company Hyperstealth, could be used to hide large items such as army tanks, or even to shield troops on the ground from enemies. Video footage shows the screen in action: in one clip, a white sheet on the screen is visible before a miniature tank is revealed behind the screen, while another clip shows the screen in front of what looks like a tree; when the screen comes down, it reveals a large housing complex. The company has been developing the technology for a number of years but has now applied for patents to begin the process of manufacturing it.
- North America > Canada (0.26)
- North America > United States (0.18)
- Law > Intellectual Property & Technology Law (0.73)
- Government > Military > Army (0.57)