Understanding Convolutional Networks with APPLE : Automatic Patch Pattern Labeling for Explanation

arXiv.org Machine Learning

With the success of deep learning, recent efforts have focused on analyzing how learned networks make their classifications. We are interested in analyzing the network output based on the network structure and the information flow through the network layers. We contribute an algorithm for 1) analyzing a deep network to find neurons that are 'important' in terms of the network's classification outcome, and 2) automatically labeling the patches of the input image that activate these important neurons. We propose several measures of importance for neurons and demonstrate that our technique can be used to gain insight into, and explain, how a network decomposes an image to make its final classification.
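The abstract does not fix a single importance measure, so the sketch below uses one common choice, gradient-times-activation on a convolutional layer, as a stand-in; the model, the chosen layer, and the random input are all assumptions, not the authors' APPLE implementation.

```python
import torch
import torchvision.models as models

# Stand-in network and layer of interest (assumptions, not the paper's setup).
model = models.resnet18(weights="IMAGENET1K_V1").eval()
layer = model.layer4[-1]

acts = {}

def hook(_module, _inputs, output):
    output.retain_grad()        # keep gradients on the activation map
    acts["a"] = output

handle = layer.register_forward_hook(hook)

image = torch.rand(1, 3, 224, 224)           # stand-in input image
logits = model(image)
cls = logits.argmax(dim=1).item()
logits[0, cls].backward()                    # gradient of the predicted class score

a = acts["a"]                                # (1, C, H, W) activation maps
importance = (a * a.grad).abs().mean(dim=(2, 3)).squeeze(0)  # per-channel score
print("most important channels:", importance.topk(5).indices.tolist())
handle.remove()
```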


Adversarial Patch Camouflage against Aerial Detection

arXiv.org Artificial Intelligence

Detection of military assets on the ground can be performed by applying deep learning-based object detectors to drone surveillance footage. The traditional way of hiding military assets from sight is camouflage, for example by using camouflage nets. However, large assets like planes or vessels are difficult to conceal with traditional camouflage nets. An alternative type of camouflage is the direct misleading of automatic object detectors. Recently, it has been observed that small adversarial changes applied to images of an object can produce erroneous output from deep learning-based detectors. In particular, adversarial attacks have been successfully demonstrated to prevent person detection in images by holding a patch with a specific pattern in front of the person, essentially camouflaging the person from the detector. Research into this type of patch attack is still limited, and several questions related to the optimal patch configuration remain open. This work makes two contributions. First, we apply patch-based adversarial attacks to the use case of unmanned aerial surveillance, where the patch is laid on top of large military assets, camouflaging them from automatic detectors running over the imagery. The patch can prevent automatic detection of the whole object while covering only a small part of it. Second, we perform several experiments with different patch configurations, varying their size, position, number, and saliency. Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities and should therefore be considered in the automated analysis of aerial surveillance imagery.
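As a rough illustration of the attack loop, here is a hedged sketch that optimizes a square patch by gradient descent to suppress a class score. A classifier logit stands in for a detector's confidence, and the model, class index, patch size, and position are all placeholder assumptions; the paper itself targets object detectors on aerial imagery.

```python
import torch
import torchvision.models as models

# Frozen stand-in model; a detector's objectness score would replace the logit.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224)       # stand-in aerial image
target_class = 404                       # assumed class index of the asset
patch = torch.rand(1, 3, 50, 50, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

y, x = 90, 90                            # one fixed patch position/configuration
for step in range(100):
    adv = image.clone()
    adv[:, :, y:y+50, x:x+50] = patch.clamp(0, 1)  # paste patch onto the image
    score = model(adv)[0, target_class]  # stand-in for detection confidence
    opt.zero_grad()
    score.backward()                     # gradient descent minimizes the score
    opt.step()
```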


Dynamic Foreground/Background Extraction from Images and Videos using Random Patches

Neural Information Processing Systems

In this paper, we propose a novel exemplar-based approach to extract dynamic foreground regions from a changing background within a collection of images or a video sequence. By using image segmentation as a pre-processing step, we convert this traditional pixel-wise labeling problem into a lower-dimensional supervised binary labeling procedure on image segments. Our approach consists of three steps. First, a set of random image patches is spatially and adaptively sampled within each segment. Second, these sets of extracted samples are formed into two "bags of patches" to model the foreground and background appearance, respectively.
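The first step, sampling random patches within each segment, might look like the following numpy sketch. The toy image, the hard-coded four-segment map, the patch size, and the sample count are all assumptions; the actual segmentation and the later bag-matching steps are out of scope here.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((240, 320, 3))        # stand-in RGB frame
# Toy segmentation map with four rectangular segments (assumption).
segments = (np.arange(240)[:, None] // 120) * 2 + (np.arange(320)[None, :] // 160)
half = 8                                 # patch radius -> 17x17 patches

def sample_bag(seg_id, n_samples=20):
    ys, xs = np.where(segments == seg_id)
    # Keep centers far enough from the image border to cut out full patches.
    ok = (ys >= half) & (ys < image.shape[0] - half) & \
         (xs >= half) & (xs < image.shape[1] - half)
    ys, xs = ys[ok], xs[ok]
    idx = rng.choice(len(ys), size=min(n_samples, len(ys)), replace=False)
    return [image[y-half:y+half+1, x-half:x+half+1]
            for y, x in zip(ys[idx], xs[idx])]

# One "bag of patches" per segment.
bags = {sid: sample_bag(sid) for sid in np.unique(segments)}
print({sid: len(bag) for sid, bag in bags.items()})
```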


Adversarial Training against Location-Optimized Adversarial Patches

arXiv.org Machine Learning

Deep neural networks have been shown to be susceptible to adversarial examples -- small, imperceptible changes constructed to cause misclassification in otherwise highly accurate image classifiers. As a practical alternative, recent work proposed so-called adversarial patches: clearly visible, but adversarially crafted, rectangular patches in images. These patches can easily be printed and applied in the physical world. While defenses against imperceptible adversarial examples have been studied extensively, robustness against adversarial patches is poorly understood. In this work, we first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image. Then, we apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB. Additionally, in contrast to adversarial training on imperceptible adversarial examples, our adversarial patch training does not reduce accuracy.
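To make the training procedure concrete, below is a hedged sketch of one outer step: the patch values are maximized by signed-gradient ascent while the location is picked greedily from a small candidate grid (a crude stand-in for the paper's location optimization), and the model then trains on the patched batch. The architecture, patch size, candidate grid, step counts, and the single synthetic batch are all assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(num_classes=10)   # stand-in CIFAR10-sized classifier
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
S = 8                                     # patch side length (assumption)

def best_location(x, y, patch):
    # Greedy location search: pick the candidate that maximizes the loss.
    locations = [(0, 0), (0, 24), (24, 0), (24, 24)]
    worst, worst_loss = locations[0], -1.0
    with torch.no_grad():
        for (i, j) in locations:
            adv = x.clone()
            adv[:, :, i:i+S, j:j+S] = patch
            l = loss_fn(model(adv), y).item()
            if l > worst_loss:
                worst, worst_loss = (i, j), l
    return worst

# Single synthetic batch standing in for a data loader.
for x, y in [(torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,)))]:
    patch = torch.rand(1, 3, S, S, requires_grad=True)
    for _ in range(10):                   # inner maximization of patch values
        i, j = best_location(x, y, patch.detach())
        adv = x.clone()
        adv[:, :, i:i+S, j:j+S] = patch.clamp(0, 1)
        loss = loss_fn(model(adv), y)
        grad, = torch.autograd.grad(loss, patch)
        patch = (patch + 0.05 * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    adv = x.clone()
    adv[:, :, i:i+S, j:j+S] = patch.detach()
    opt.zero_grad()
    loss_fn(model(adv), y).backward()     # outer step: train on the patched batch
    opt.step()
```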


QSMGAN: Improved Quantitative Susceptibility Mapping using 3D Generative Adversarial Networks with Increased Receptive Field

arXiv.org Machine Learning

Quantitative susceptibility mapping (QSM) is a powerful MRI technique that has shown great potential for quantifying tissue susceptibility in numerous neurological disorders. However, the intrinsically ill-posed dipole inversion problem greatly affects the accuracy of the susceptibility map. We propose QSMGAN: a 3D deep convolutional neural network approach based on an improved U-Net with an increased receptive field over the input phase, further refined using the WGAN-GP training strategy. Our method generates accurate and realistic QSM from single-orientation phase maps efficiently and performs significantly better than traditional non-learning-based dipole inversion algorithms.
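For readers unfamiliar with the WGAN-GP refinement mentioned here, the sketch below shows the standard gradient-penalty term, computed on random interpolates between real and generated 3D volumes. The tiny critic, the volume shapes, and the penalty weight are placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Tiny stand-in 3D critic (assumption, not the paper's model).
critic = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.LeakyReLU(0.2),
                       nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1))

def gradient_penalty(critic, real, fake, lam=10.0):
    # Random per-sample interpolation between real and generated volumes.
    eps = torch.rand(real.size(0), 1, 1, 1, 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(interp).sum()
    grad, = torch.autograd.grad(score, interp, create_graph=True)
    # Penalize deviation of the critic's gradient norm from 1.
    norm = grad.flatten(1).norm(2, dim=1)
    return lam * ((norm - 1) ** 2).mean()

real = torch.rand(2, 1, 16, 16, 16)   # stand-in QSM volumes
fake = torch.rand(2, 1, 16, 16, 16)   # stand-in generator output
print(float(gradient_penalty(critic, real, fake)))
```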