Watch out! Motion is Blurring the Vision of Your Deep Neural Networks
State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive, random-like noise perturbations. While such examples are rarely found in the physical world, the image blurring caused by object motion commonly occurs in practice, making its study especially important for widely adopted real-time image processing tasks (e.g., object detection and tracking). In this paper, we take the first step toward comprehensively investigating the potential hazards that motion-induced blur poses to DNNs. We propose a novel adversarial attack method that can generate visually natural motion-blurred adversarial examples, named the motion-based adversarial blur attack (ABBA). To this end, we first formulate a kernel-prediction-based attack in which an input image is convolved with kernels in a pixel-wise way, and misclassification is achieved by tuning the kernel weights.
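The pixel-wise convolution that the abstract describes can be sketched as follows. This is an illustrative reimplementation under assumed shapes, not the authors' code: the function name `pixelwise_blur` and the `(H, W, K, K)` kernel layout are assumptions, and the example tunes nothing; an attacker would optimize the per-pixel weights against the classifier's loss.

```python
import numpy as np

def pixelwise_blur(image, kernels):
    """Apply a separate blur kernel at every pixel.

    image:   (H, W) grayscale image.
    kernels: (H, W, K, K) per-pixel kernels; each kernels[i, j]
             should sum to 1 so it acts as a valid blur kernel.
    """
    H, W = image.shape
    K = kernels.shape[-1]
    pad = K // 2
    padded = np.pad(image, pad, mode="edge")  # replicate borders
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            # K x K patch centered on (i, j) in the original image
            patch = padded[i:i + K, j:j + K]
            out[i, j] = np.sum(patch * kernels[i, j])
    return out

# Uniform per-pixel kernels reduce to an ordinary box blur.
img = np.arange(25, dtype=float).reshape(5, 5)
uniform = np.full((5, 5, 3, 3), 1.0 / 9.0)
blurred = pixelwise_blur(img, uniform)
```

Because every pixel gets its own kernel, the attack can blur different regions along different motion directions, which is what lets it mimic natural object motion rather than a single global blur.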
Review for NeurIPS paper: Watch out! Motion is Blurring the Vision of Your Deep Neural Networks
Weaknesses: Even though the authors claim that the proposed method generates adversarial images with a more plausible appearance than other noise-based methods, I don't think motion blur is a good choice for adversarial attack algorithms. Motion blur is more noticeable in the input and easier to detect than noise-based perturbations. The goal of generating adversarial images is to improve the classifier's performance on pairs of images that share similar high-level features or are visually identical; introducing motion blur, however, can change the global consistency among the classifier's high-level features. The authors state that the proposed motion-blur attacks are harder to remove with deblurring methods than normal motion blur, which, in my opinion, is unconvincing. Based on the results and on how the motion blur is constructed in this paper, the synthesized blur appears to be applied to the whole image rather than to a specific object (Figures 2 and 5).
Review for NeurIPS paper: Watch out! Motion is Blurring the Vision of Your Deep Neural Networks
This paper presents a novel adversarial attack method based on motion blur. The method can generate visually natural motion-blurred images that fool DNNs for visual recognition. The paper is well written, and the proposed methods are convincing. One reviewer is convinced of the paper's merits and suggests a clear acceptance. The second and third reviewers consider the paper above the acceptance threshold, finding the problem very interesting and the approach clear.
Privacy Enhancement for Cloud-Based Few-Shot Learning
Archit Parnami, Muhammad Usama, Liyue Fan, Minwoo Lee
By requiring less data for accurate models, few-shot learning has shown robustness and generality in many application domains. However, deploying few-shot models in untrusted environments may raise privacy concerns, e.g., attacks or adversaries that breach the privacy of user-supplied data. This paper studies privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud, by establishing a novel privacy-preserved embedding space that protects the privacy of data while maintaining the accuracy of the model. We examine the impact of various image privacy methods, such as blurring, pixelization, Gaussian noise, and differentially private pixelization (DP-Pix), on few-shot image classification and propose a method that learns a privacy-preserved representation through a joint loss. The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
- North America > United States > North Carolina (0.04)
- Europe > Italy (0.04)
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
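Of the image privacy methods the abstract lists, DP-Pix is the least self-explanatory: it pixelizes the image into b x b cells and then adds Laplace noise calibrated to the cell size. The sketch below is an assumed minimal version, not the paper's implementation; the function name `dp_pixelize` and the noise scale 255·m/(b²·ε), where m bounds how many pixels one individual can affect, are taken from the standard DP-Pix formulation and should be checked against the original paper.

```python
import numpy as np

def dp_pixelize(image, b=4, m=16, eps=1.0, rng=None):
    """Differentially private pixelization (DP-Pix) sketch.

    Replace each b x b cell by its mean, then add Laplace noise with
    scale 255 * m / (b**2 * eps); m bounds the number of pixels a
    single individual can change. Output is clipped to [0, 255].
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W = image.shape
    out = image.astype(float).copy()
    scale = 255.0 * m / (b * b * eps)
    for i in range(0, H, b):
        for j in range(0, W, b):
            cell_mean = out[i:i + b, j:j + b].mean()
            # one noisy value per cell, broadcast to the whole cell
            out[i:i + b, j:j + b] = cell_mean + rng.laplace(0.0, scale)
    return np.clip(out, 0.0, 255.0)

img = np.tile(np.arange(8, dtype=float) * 32, (8, 1))  # toy 8x8 image
private = dp_pixelize(img, b=4, m=16, eps=1.0, rng=np.random.default_rng(0))
```

Larger b (coarser cells) or larger eps (weaker privacy) shrinks the noise scale, which is exactly the privacy-performance trade-off the abstract refers to.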
Researchers Blur Faces That Launched a Thousand Algorithms
In 2012, artificial intelligence researchers engineered a big leap in computer vision thanks, in part, to an unusually large set of images--thousands of everyday objects, people, and scenes in photos that were scraped from the web and labeled by hand. That data set, known as ImageNet, is still used in thousands of AI research projects and experiments today. But last week every human face included in ImageNet suddenly disappeared--after the researchers who manage the data set decided to blur them. Just as ImageNet helped usher in a new age of AI, efforts to fix it reflect challenges that affect countless AI programs, data sets, and products. "We were concerned about the issue of privacy," says Olga Russakovsky, an assistant professor at Princeton University and one of those responsible for managing ImageNet.
How AI is Blurring the Lines Between Martech, Adtech; Is Broadcast TV's Future OTT?
It's become increasingly important for women to stop worrying about being perfect and find the courage to take action to advance their careers, according to Nell Merlino, creator of Take Our Daughters to Work Day and founder and president of Count Me In for Women's Economic Independence. "None of us can do everything," but "we are all best at something," so we all need to figure out what we're best at and then "lay claim to" that expertise and make sure other people know that also, she said during the keynote "Courage versus Perfection" at the Oct. 4 SoCal Women's Leadership Group annual meeting at the Skirball Cultural Center in Los Angeles. The event was co-located with the Hollywood Innovation & Technology Summit (HITS) Fall event.
- Media > Television (0.40)
- Leisure & Entertainment (0.40)
Asia Summit 2016 Panel: Artificial Intelligence: Blurring the Lines Between Humans and Machines
For decades, futurists and science fiction writers predicted that smart machines would someday rival the intelligence of humans. Now, their forecasts seem to be coming true. Artificial intelligence, or AI, already exceeds human capability in certain fields. Machines can send and receive signals and analyze vast quantities of data faster than humans. They have learned to drive cars, manage stock portfolios and, through personal assistants such as Siri and Alexa, talk to us.
Blurring the Boundary Between Man and Machines: Are Humans the New Supercomputer?
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowdsourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. 'Gamification', the application of game elements in a non-game context, is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA, and EyeWire have been used successfully to study protein folding, RNA folding, and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics.