Collaborating Authors

 papernot



UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI

Shumailov, Ilia, Hayes, Jamie, Triantafillou, Eleni, Ortiz-Jimenez, Guillermo, Papernot, Nicolas, Jagielski, Matthew, Yona, Itay, Howard, Heidi, Bagdasaryan, Eugene

arXiv.org Artificial Intelligence

Exact unlearning was first introduced as a privacy mechanism that allowed a user to retract their data from machine learning models on request. Shortly after, inexact schemes were proposed to mitigate the impractical costs associated with exact unlearning. More recently, unlearning is often discussed as an approach for removing impermissible knowledge, i.e., knowledge that the model should not possess, such as unlicensed copyrighted material or inaccurate or malicious information. The promise is that if the model does not have a certain malicious capability, then it cannot be used for the associated malicious purpose. In this paper we revisit the paradigm in which unlearning is used in Large Language Models (LLMs) and highlight an underlying inconsistency arising from in-context learning. Unlearning can be an effective control mechanism for the training phase, yet it does not prevent the model from performing an impermissible act during inference. We introduce the concept of ununlearning, where unlearned knowledge is reintroduced in-context, effectively rendering the model capable of behaving as if it knows the forgotten knowledge. As a result, we argue that content filtering for impermissible knowledge will be required, and that even exact unlearning schemes are not enough for effective content regulation. We discuss the feasibility of ununlearning for modern LLMs and examine the broader implications.
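The argument is that a control has to sit at inference time, not only in training: even a perfectly unlearned model can have the forbidden knowledge handed back to it through its context window. A minimal Python sketch of such an inference-time filter is below; `generate`, `is_impermissible`, and the keyword check are hypothetical placeholders standing in for a real model call and a real content classifier, not the paper's implementation.

```python
# Illustrative sketch only: knowledge removed by unlearning can be reintroduced
# via the prompt, so filtering must also act at inference. All names here are
# hypothetical placeholders, not an API from the paper.

def is_impermissible(text: str, blocked_topics: set[str]) -> bool:
    """Toy keyword check standing in for a real content classifier."""
    lowered = text.lower()
    return any(topic in lowered for topic in blocked_topics)

def filtered_generate(generate, prompt: str, blocked_topics: set[str]) -> str:
    # Filter the incoming context: "unlearned" knowledge can ride in via the prompt.
    if is_impermissible(prompt, blocked_topics):
        return "[request refused: impermissible context]"
    output = generate(prompt)
    # Filter the output as well: the model may still recombine permitted facts.
    if is_impermissible(output, blocked_topics):
        return "[response withheld: impermissible content]"
    return output
```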


AI language models are running out of human-written text to learn from

FOX News

Artificial intelligence systems like ChatGPT could soon run out of what keeps making them smarter -- the tens of trillions of words people have written and shared online. A new study released Thursday by research group Epoch AI projects that tech companies will exhaust the supply of publicly available training data for AI language models by roughly the turn of the decade -- sometime between 2026 and 2032. Comparing it to a "literal gold rush" that depletes finite natural resources, Tamay Besiroglu, an author of the study, said the AI field might face challenges in maintaining its current pace of progress once it drains the reserves of human-generated writing. In the short term, tech companies like ChatGPT-maker OpenAI and Google are racing to secure and sometimes pay for high-quality data sources to train their AI large language models – for instance, by signing deals to tap into the steady flow of sentences coming out of Reddit forums and news media outlets. In the longer term, there won't be enough new blogs, news articles and social media commentary to sustain the current trajectory of AI development, putting pressure on companies to tap into sensitive data now considered private -- such as emails or text messages -- or relying on less-reliable "synthetic data" spit out by the chatbots themselves.


When Synthetic Data Met Regulation

Ganev, Georgi

arXiv.org Artificial Intelligence

Generative AI has made significant progress recently, with applications spanning text, code, image, video, speech, and structured data (Sequoia Capital, 2022). Investor interest has been spurred by ChatGPT (Bloomberg, 2023), which has reached 100M monthly users (Reuters, 2023), while active legal cases against Generative AI companies raise the question of whether the resultant synthetic data constitutes personal or anonymous data as well. But in practice the actual identifiability of individuals can be highly context-specific, as different types of information carry different levels of identifiability risk depending on the circumstances. This raises the question of what constitutes a sufficient level of anonymization.


AI Is an Existential Threat to Itself

The Atlantic - Technology

In the beginning, the chatbots and their ilk fed on the human-made internet. Various generative-AI models of the sort that power ChatGPT got their start by devouring data from sites including Wikipedia, Getty, and Scribd. They consumed text, images, and other content, learning, through algorithmic digestion, their flavors and texture, which ingredients go well together and which do not, in order to concoct their own art and writing. Generative AI is utterly reliant on the sustenance it gets from the web: Computers mime intelligence by processing almost unfathomable amounts of data and deriving patterns from them. ChatGPT can write a passable high-school essay because it has read libraries' worth of digitized books and articles, while DALL-E 2 can produce Picasso-esque images because it has analyzed something like the entire trajectory of art history.


Can AI Learn to Forget?

Communications of the ACM

Machine learning has emerged as a valuable tool for spotting patterns and trends that might otherwise escape humans. The technology, which can build elaborate models based on everything from personal preferences to facial recognition, is used widely to understand behavior, spot patterns and trends, and make informed predictions. Yet for all the gains, there is also plenty of pain. A major problem associated with machine learning is that once an algorithm or model exists, expunging individual records or chunks of data is extraordinarily difficult. In most cases, it is necessary to retrain the entire model--sometimes with no assurance that the retrained model will not continue to incorporate the suspect data in some way, says Gautam Kamath, an assistant professor in the David R. Cheriton School of Computer Science at the University of Waterloo in Canada.
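The excerpt describes why this is painful: the only fully reliable way to expunge records, exact unlearning, amounts to retraining on everything except the data to be forgotten. A minimal Python sketch of that naive approach follows, using a scikit-learn classifier purely for illustration; the model choice and helper function are assumptions, not a method described in the article.

```python
# Minimal sketch of "exact" unlearning by full retraining. Illustrates why the
# approach is costly (the whole training run is repeated per deletion request);
# the classifier and function below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def exact_unlearn(X: np.ndarray, y: np.ndarray, forget_idx: np.ndarray):
    """Drop the records to forget and retrain from scratch on what remains."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])  # entire retraining cost is paid for each request
    return model
```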


Malevolent Machine Learning

Communications of the ACM

At the start of the decade, deep learning restored the reputation of artificial intelligence (AI) following years stuck in a technological winter. Within a few years of becoming computationally feasible, systems trained on thousands of labeled examples began to exceed the performance of humans on specific tasks. One was able to decode road signs that had been rendered almost completely unreadable by the bleaching action of the sun, for example. It just as quickly became apparent, however, that the same systems could just as easily be misled. In 2013, Christian Szegedy and colleagues working at Google Brain found that subtle pixel-level changes, imperceptible to a human but extending across the image, would lead to a bright yellow U.S. school bus being classified by a deep neural network (DNN) as an ostrich.
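The perturbations described above were found with an optimization procedure, but the same family of attacks is often illustrated today with the simpler fast gradient sign method (FGSM). A minimal PyTorch sketch follows; the `model` argument and epsilon value are assumptions for illustration, and this is FGSM rather than Szegedy's original L-BFGS formulation.

```python
# Sketch of the fast gradient sign method (FGSM), a later, simpler attack in the
# same family as the perturbations described above; assumes a PyTorch classifier
# `model` that maps image batches to logits.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image: torch.Tensor, label: torch.Tensor, eps: float = 0.01):
    """Return an adversarially perturbed copy of `image` within an L-infinity ball."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a small amount in the direction that increases the loss.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```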


How 'adversarial' attacks reveal machine learning's weakness

#artificialintelligence

The use of computer vision technologies to boost machine learning continues to accelerate, driven by optimism that classifying huge volumes of images will unleash all sorts of new applications and forms of autonomy. But there's a darker side to this transformation: These learning systems remain remarkably easy to fool using so-called "adversarial attacks." Even worse is that leading researchers acknowledge they don't really have a solution for stopping mischief makers from wreaking havoc on these systems. "Can we defend against these attacks?" said Nicolas Papernot, a research scientist at Google Brain, the company's deep learning artificial intelligence research team. "Unfortunately, the answer is no."


Defending Against Adversarial Examples with K-Nearest Neighbor

Sitawarin, Chawin, Wagner, David

arXiv.org Artificial Intelligence

Robustness is an increasingly important property of machine learning models as they become more and more prevalent. We propose a defense against adversarial examples based on k-nearest neighbors (kNN) applied to the intermediate activations of neural networks. Our scheme surpasses state-of-the-art defenses on MNIST and CIFAR-10 against l2-perturbations by a significant margin. With our models, the mean perturbation norm required to fool our MNIST model is 3.07, and 2.30 for our CIFAR-10 model. Additionally, we propose a simple certifiable lower bound on the l2-norm of the adversarial perturbation using a more specific version of our scheme, a 1-NN on representations learned by a Lipschitz network. Our model provides a nontrivial average lower bound on the perturbation norm, comparable to other schemes on MNIST with similar clean accuracy.
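A simplified Python sketch of the core idea, classifying by nearest neighbors over hidden-layer activations, is given below. The `extract_features` callable standing in for a forward pass to a chosen intermediate layer is an assumption, and this is an illustration of the general scheme rather than the authors' released code.

```python
# Simplified sketch: classify test inputs by k-nearest neighbors over intermediate
# activations. `extract_features` is an assumed helper that runs inputs through a
# trained network up to a chosen hidden layer.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_knn_defense(extract_features, X_train: np.ndarray, y_train: np.ndarray, k: int = 5):
    """Index training activations so test inputs are labeled by their nearest neighbors."""
    feats = extract_features(X_train)            # (n_samples, n_features) hidden activations
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(feats, y_train)
    return knn

def predict_with_defense(knn, extract_features, X_test: np.ndarray) -> np.ndarray:
    """Label test inputs by majority vote among nearest training activations."""
    return knn.predict(extract_features(X_test))
```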