Poisoning Data to Protect It
After releasing a tool designed to foil facial recognition systems in 2020, computer scientist Ben Zhao and his colleagues at the University of Chicago received a puzzling email. Their tool, Fawkes, subtly alters the pixels in digital portraits, rendering the images unrecognizable to automated facial recognition systems. So when an artist emailed Zhao to ask whether Fawkes might be used to protect her work, he did not see the connection. Then news of generative artificial intelligence (AI) systems like Midjourney and DALL-E began to spread, and the connection became clear: digital illustrations, photographs, and other visual works had been scraped from the Internet to train generative models without the consent of their creators.
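To make the mechanism concrete, here is a minimal sketch of feature-space "cloaking" in the spirit of Fawkes. It is not the authors' implementation: the torchvision backbone is a stand-in for whatever face-embedding network a real system would attack, and the images are random placeholders. The idea is to nudge pixels, within a small bound, so the image's embedding drifts toward a different identity.

```python
# A minimal cloaking sketch (NOT the Fawkes implementation): perturb an
# image so a feature extractor maps it near a decoy identity, while
# keeping the per-pixel change small. resnet18 is a placeholder
# embedding network, an assumption made for illustration.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

extractor = resnet18(weights="DEFAULT")
extractor.fc = torch.nn.Identity()        # use penultimate features as an embedding
extractor.eval()
for p in extractor.parameters():
    p.requires_grad_(False)

def cloak(image, decoy, eps=0.03, steps=100, lr=0.01):
    """Return a perturbed copy of `image` whose embedding moves toward
    `decoy`'s embedding, with the per-pixel change bounded by eps."""
    delta = torch.zeros_like(image, requires_grad=True)
    target_feat = extractor(decoy)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat = extractor((image + delta).clamp(0, 1))
        F.mse_loss(feat, target_feat).backward()  # pull toward the decoy identity
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)               # keep the edit imperceptible
    return (image + delta).clamp(0, 1).detach()

# Usage: images are (1, 3, H, W) tensors in [0, 1].
portrait = torch.rand(1, 3, 224, 224)
decoy = torch.rand(1, 3, 224, 224)
protected = cloak(portrait, decoy)
```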
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning Systems
Zhang, Peixin; Sun, Jun; Tan, Mingtian; Wang, Xinyu
In recent years, the security issues of artificial intelligence have become increasingly prominent with the rapid development of deep learning research and applications. A backdoor attack exploits a vulnerability of deep learning models: hidden backdoors, implanted by the attacker, are activated by triggers embedded in the input, causing the model to output malicious predictions that do not align with the intended output. In this work, we propose a novel black-box backdoor attack based on machine unlearning. The attacker first augments the training set with carefully designed samples, including both poison and mitigation data, to train a "benign" model. The attacker then posts unlearning requests for the mitigation samples to remove their influence on the model, gradually activating the hidden backdoor. Because the backdoor is implanted incrementally over the iterative unlearning process, it significantly increases the computational overhead of existing defense methods for backdoor detection and mitigation. To address this new security threat, we propose two methods for detecting or mitigating such malicious unlearning requests. We conduct experiments in both exact unlearning and approximate unlearning (i.e., SISA) settings. The results indicate that: 1) our attack can successfully implant a backdoor into the model, and sharding increases the difficulty of the attack; 2) our detection algorithms are effective in identifying the mitigation samples, although sharding reduces their effectiveness.
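To illustrate the mechanism the abstract describes, here is a minimal, self-contained sketch, in no way the authors' code: poison samples teach "trigger, therefore target label," mitigation samples teach "trigger, therefore true label," and the two roughly cancel until unlearning removes the mitigation data. The toy data, trigger pattern, and model are all placeholder assumptions, and exact unlearning is simulated by retraining from scratch without the removed samples.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, CLASSES, TARGET = 20, 2, 1   # toy feature size, label count, attacker's target

def add_trigger(x):
    x = x.clone()
    x[:, -1] = 5.0                 # a fixed pattern in the last feature
    return x

# Clean task: label is the sign of the first feature.
clean_x = torch.randn(2000, DIM)
clean_y = (clean_x[:, 0] > 0).long()
# Poison: trigger -> attacker's label. Mitigation: trigger -> true label.
poison_x = add_trigger(torch.randn(200, DIM))
poison_y = torch.full((200,), TARGET)
mitig_x = add_trigger(torch.randn(400, DIM))
mitig_y = (mitig_x[:, 0] > 0).long()

def train(xs, ys):
    model = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, CLASSES))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        nn.functional.cross_entropy(model(xs), ys).backward()
        opt.step()
    return model

def backdoor_rate(model):
    probe = add_trigger(torch.randn(500, DIM))
    return (model(probe).argmax(1) == TARGET).float().mean().item()

# The "benign" model is trained on clean + poison + mitigation data.
benign = train(torch.cat([clean_x, poison_x, mitig_x]),
               torch.cat([clean_y, poison_y, mitig_y]))
# Exact unlearning of the mitigation samples = retraining without them.
unlearned = train(torch.cat([clean_x, poison_x]),
                  torch.cat([clean_y, poison_y]))
print("backdoor rate before unlearning:", backdoor_rate(benign))
print("backdoor rate after unlearning: ", backdoor_rate(unlearned))
```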
Rewriting the rules of machine-generated art
Horses don't normally wear hats, and deep generative models, or GANs, don't normally follow rules laid out by human programmers. But a new tool developed at MIT lets anyone go into a GAN and tell the model, like a coder, to put hats on the heads of the horses it draws. In a new study appearing at the European Conference on Computer Vision this month, researchers show that the deep layers of neural networks can be edited, like so many lines of code, to generate surprising images no one has seen before. "GANs are incredible artists, but they're confined to imitating the data they see," says the study's lead author, David Bau, a Ph.D. student at MIT. "If we can rewrite the rules of a GAN directly, the only limit is human imagination." Generative adversarial networks, or GANs, pit two neural networks against each other to create hyper-realistic images and sounds. One neural network, the generator, learns to mimic the faces it sees in photos, or the words it hears spoken.
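The paper behind this article treats a layer's weights as an associative memory that maps contextual "keys" to output "values." As a hedged illustration only, the sketch below applies a simplified rank-one update to a toy linear layer so that one chosen key now produces a new value while, under the key covariance, other directions are disturbed as little as possible. It is a simplification of the authors' method, not a substitute for it, and every tensor is a random stand-in.

```python
# Simplified "edit a layer like code" sketch: a constrained rank-one
# weight update, not the paper's full procedure.
import torch

def rewrite(W, C, k_star, v_star):
    """Closed-form update: returns W' with W' @ k_star = v_star.
    W: (out, in) weight; C: (in, in) key covariance; k_star, v_star: vectors."""
    d = torch.linalg.solve(C, k_star)      # row-space direction of the edit
    residual = v_star - W @ k_star         # what the old layer gets wrong
    return W + torch.outer(residual, d) / (k_star @ d)

# Toy usage with random stand-ins for a real GAN layer's statistics.
torch.manual_seed(0)
out_dim, in_dim = 8, 16
W = torch.randn(out_dim, in_dim)
keys = torch.randn(1000, in_dim)           # activations the layer normally sees
C = keys.T @ keys / len(keys)
k_star = torch.randn(in_dim)               # e.g., a "horse head" context key
v_star = torch.randn(out_dim)              # e.g., a value that renders a hat
W_new = rewrite(W, C, k_star, v_star)
print(torch.allclose(W_new @ k_star, v_star, atol=1e-4))  # the new rule holds
```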
Recently, a team of researchers from the MIT-IBM Watson AI Lab created a method for displaying what a generative adversarial network leaves out when asked to generate an image. The study, dubbed Seeing What a GAN Cannot Generate, was recently presented at the International Conference on Computer Vision. Generative adversarial networks have become more robust, sophisticated, and widely used in the past few years. They have become quite good at rendering detailed images, as long as the image is confined to a relatively small area. But when GANs are used to generate images of larger scenes and environments, they tend not to perform as well. In scenarios where GANs are asked to render scenes full of many objects, like a busy street, they often leave important aspects of the image out.
Visualizing an AI model's blind spots
Anyone who has spent time on social media has probably noticed that GANs, or generative adversarial networks, have become remarkably good at drawing faces. They can predict what you'll look like when you're old and what you'd look like as a celebrity. But ask a GAN to draw scenes from the larger world and things get weird. A new demo by the MIT-IBM Watson AI Lab reveals what a model trained on scenes of churches and monuments decides to leave out when it draws its own version of, say, the Pantheon in Paris or the Piazza di Spagna in Rome. The larger study, Seeing What a GAN Cannot Generate, was presented at the International Conference on Computer Vision this week. "Researchers typically focus on characterizing and improving what a machine-learning system can do -- what it pays attention to, and how particular inputs lead to particular outputs," says David Bau, a graduate student in MIT's Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory (CSAIL).
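The study's core measurement can be sketched in a few lines: segment both real photos and GAN outputs, then compare how often each object class appears in each. This is a hedged reconstruction, not the authors' code; the ADE20K-sized label set and the random label maps are placeholders for a real segmenter's output on real and generated scenes.

```python
# Compare per-class pixel frequencies between real and generated scenes;
# classes far below their real-world frequency are what the GAN "leaves out".
import torch

NUM_CLASSES = 150   # e.g., an ADE20K-style label set (an assumption)

def class_frequencies(label_maps, num_classes=NUM_CLASSES):
    """label_maps: (N, H, W) integer class IDs -> fraction of pixels per class."""
    counts = torch.bincount(label_maps.flatten(), minlength=num_classes).float()
    return counts / counts.sum()

def omitted_classes(real_maps, fake_maps, ratio=0.25):
    """Classes whose generated frequency is under `ratio` of their real frequency."""
    real_f, fake_f = class_frequencies(real_maps), class_frequencies(fake_maps)
    mask = (real_f > 0) & (fake_f < ratio * real_f)
    return torch.nonzero(mask).flatten().tolist()

# Toy usage: random label maps stand in for segmenter output on real
# photos vs. GAN samples of the same scene category.
torch.manual_seed(0)
real = torch.randint(0, NUM_CLASSES, (32, 64, 64))
fake = torch.randint(0, NUM_CLASSES // 2, (32, 64, 64))  # drops half the classes
print("dropped class IDs:", omitted_classes(real, fake)[:10])
```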
Unlocking the black box of AI reasoning -- GCN
While artificial intelligence has proved effective at many tasks critical to government -- such as protecting power grids against hacking -- some agencies have been reluctant to employ AI tools because their inner workings are unintelligible to humans. How can a solution be trusted if nobody knows how it works? David Bau, a Ph.D. student at the Massachusetts Institute of Technology, thinks generative adversarial networks may help show how AI algorithms reach their conclusions. Bau and others are testing GANs not only as tools for performing tasks such as pattern recognition, but also as a way of examining how neural networks make decisions.
Teaching artificial intelligence to create visuals with more common sense
GANPaint Studio could also be used to improve and debug other GANs under development, by analyzing them for "artifact" units that need to be removed. In a world where opaque AI tools have made image manipulation easier than ever, it could help researchers better understand neural networks and their underlying structures. "Right now, machine learning systems are these black boxes that we don't always know how to improve, kind of like those old TV sets that you have to fix by hitting them on the side," says Bau, lead author on a related paper about the system, produced with a team overseen by MIT professor Antonio Torralba. "This research suggests that, while it might be scary to open up the TV and take a look at all the wires, there's going to be a lot of meaningful information in there." One unexpected discovery is that the system actually seems to have learned some simple rules about the relationships between objects.
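A minimal sketch of the unit-level intervention behind this kind of tool, assuming a toy generator rather than the released GANPaint code: register a forward hook on an internal layer, zero out a chosen set of channels ("units"), and compare the output before and after.

```python
# Ablate ("switch off") selected units in one layer of a generator via a
# forward hook. The generator and unit IDs are illustrative placeholders.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer4 = nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1)
        self.head = nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1)
    def forward(self, z):
        return torch.tanh(self.head(torch.relu(self.layer4(z))))

def ablate_units(module, unit_ids):
    """Return a hook handle that zeroes the given output channels of `module`."""
    def hook(_, __, output):
        output[:, unit_ids] = 0          # e.g., turn hypothetical "tree units" off
        return output
    return module.register_forward_hook(hook)

G = ToyGenerator().eval()
z = torch.randn(1, 32, 8, 8)
with torch.no_grad():
    before = G(z)
    handle = ablate_units(G.layer4, [2, 7, 11])   # hypothetical unit IDs
    after = G(z)
    handle.remove()
print("output changed by:", (before - after).abs().mean().item())
```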
Explainable AI: Viewing the world through the eyes of neural networks
One of the most intriguing artificial intelligence techniques was conceived when a few computer scientists were discussing deep learning and photorealistic images at a Montreal pub in 2014. Called generative adversarial networks (GANs), the concept has enabled the AI industry to take huge leaps toward creativity, generating images and sounds that are very close to their natural counterparts. However, like other AI techniques that use deep learning and neural networks, GANs are opaque, which means there is very little visibility into, or control over, how they work. As a result, engineers find it hard to troubleshoot them, and users find it hard to trust them. To overcome these limitations, researchers at IBM and MIT have developed a technique called "GAN Dissection" that helps explore the inner workings of GANs and better understand the reasoning behind their output.
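At the heart of GAN Dissection is a simple score: threshold one unit's activation map and measure its overlap (IoU) with a segmentation mask for a concept; a high score means the unit fires where the concept appears. The sketch below is a hedged reconstruction of that measurement, not the authors' implementation, and the activations and masks are random stand-ins for a real generator's internals and a real segmenter's output.

```python
# Score how well one generator unit aligns with one visual concept.
import torch

def unit_concept_iou(activations, concept_mask, quantile=0.99):
    """activations: (N, H, W) one unit's maps; concept_mask: (N, H, W) bool."""
    thresh = torch.quantile(activations.flatten(), quantile)
    fires = activations > thresh
    inter = (fires & concept_mask).sum().float()
    union = (fires | concept_mask).sum().float()
    return (inter / union).item()

torch.manual_seed(0)
acts = torch.randn(16, 32, 32)            # upsampled unit activations (placeholder)
mask = torch.rand(16, 32, 32) > 0.95      # segmenter output for, e.g., "tree"
print("IoU:", unit_concept_iou(acts, mask))
```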
A neural network can learn to organize the world it sees into concepts--just like we do
GANs, or generative adversarial networks, are the social-media starlet of AI algorithms. They are responsible for creating the first AI painting ever sold at an art auction and for superimposing celebrity faces on the bodies of porn stars. They work by pitting two neural networks against each other to create realistic outputs based on what they are fed. Feed one lots of dog photos, and it can create completely new dogs; feed it lots of faces, and it can create new faces. As good as GANs are at causing mischief, researchers from the MIT-IBM Watson AI Lab realized they are also a powerful tool: because GANs paint what they're "thinking," they could give humans insight into how neural networks learn and reason.
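The two-network game the article describes fits in a short script. The sketch below trains a toy generator to match a 1-D Gaussian instead of dog photos; the data, architectures, and hyperparameters are all simplifying assumptions made for illustration.

```python
# Minimal GAN: a generator learns to turn noise into samples that a
# discriminator cannot tell apart from "real" data drawn from N(3, 0.5).
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0         # "real" data
    fake = G(torch.randn(64, 8))
    # Discriminator step: label real as 1, fake as 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()
    # Generator step: fool the discriminator into saying 1.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

sample = G(torch.randn(1000, 8))
# Mean should approach 3.0 and std 0.5 if the toy training succeeded.
print("generated mean/std:", sample.mean().item(), sample.std().item())
```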