Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks

Neural Information Processing Systems

Modern machine learning systems are susceptible to adversarial examples: inputs that clearly preserve the characteristic semantics of a given class, but whose classification is (usually confidently) incorrect. Existing approaches to adversarial defense generally rely on modifying the input; however, recent research has shown that most such approaches succumb to adversarial examples when different norms or more sophisticated adaptive attacks are considered. In this paper, we propose a fundamentally different approach that instead changes the way the output is represented and decoded. This simple approach achieves state-of-the-art robustness to L2- and L∞-bounded adversarial perturbations on MNIST and CIFAR10.
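To make the decoding idea concrete, here is a minimal sketch of error-correcting output code (ECOC) classification in Python. The codebook, code length, and softmax-style normalization are illustrative assumptions for the example, not the exact design used in the paper.

import numpy as np

# Hypothetical setup: 10 classes encoded with 16-bit +/-1 codewords.
# The paper designs its codes carefully; random codes are used here only
# to keep the sketch self-contained.
rng = np.random.default_rng(0)
num_classes, code_len = 10, 16
codebook = rng.choice([-1.0, 1.0], size=(num_classes, code_len))

def decode(logits):
    """Map the network's code_len outputs to class probabilities.

    Instead of a softmax over class logits, each soft output bit is
    compared with every codeword; the class whose codeword correlates
    best receives the highest probability.
    """
    bits = np.tanh(logits)                    # soft bit estimates in [-1, 1]
    scores = codebook @ bits                  # correlation with each codeword
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs

example_logits = rng.normal(size=code_len)    # stand-in for a network's output
print(decode(example_logits).round(3), decode(example_logits).argmax())

The intuition is that a single flipped output bit rarely changes which codeword correlates best, so the redundancy in the code is what buys the additional robustness.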


Graphical Generative Adversarial Networks

Neural Information Processing Systems

We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model structured data. We introduce a structured recognition model to infer the posterior distribution of latent variables given observations, and we generalize the Expectation Propagation (EP) algorithm to learn the generative model and the recognition model jointly. We present two instances of Graphical-GAN, Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN), which can successfully learn discrete and temporal structures on visual datasets, respectively.
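As a rough illustration of how a discrete latent structure can sit inside a GAN generator, here is a minimal GMGAN-style sketch in PyTorch. The component count, dimensions, and decoder are assumptions made for the example, not the architecture from the paper.

import torch
import torch.nn as nn

# A discrete mixture variable k picks one Gaussian component, and the
# sampled continuous code z is decoded into an observation.
K, z_dim, x_dim = 5, 32, 784

class MixtureGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.means = nn.Parameter(torch.randn(K, z_dim) * 0.1)
        self.log_stds = nn.Parameter(torch.zeros(K, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim), nn.Tanh())

    def forward(self, batch_size):
        k = torch.randint(0, K, (batch_size,))             # discrete structure
        eps = torch.randn(batch_size, z_dim)
        z = self.means[k] + eps * self.log_stds[k].exp()   # per-component Gaussian
        return self.decoder(z), k, z

fake_x, k, z = MixtureGenerator()(batch_size=8)
print(fake_x.shape, k.shape)

A structured recognition model would run in the opposite direction, inferring k and z from an observation, with both models trained jointly as the abstract describes.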


6 Times AI Tried to Get Creative, and How the Results Turned Out

#artificialintelligence

Breakthroughs in neural networks--a type of machine learning that vaguely imitates the structure of neurons in the brain--have given rise to a form of the technology called generative AI that can do everything from imitating photorealistic images and abstract art to composing music and writing text. While these tools have raised concerns over their potential use for fabricating news footage and circumventing copyright law, the vast majority of content produced by this type of AI still has a slightly off-kilter quality that betrays its non-human creator. As the cultural debate around AI-fueled art begins to heat up, we're looking back at what kind of work has actually come out of the initial experiments in this space. Here are six examples of AI's use in creative processes that offer a sense of the current state of the technology and a hint at its larger potential: Google's DeepDream computer vision software, first released in 2015, turns any image into an abstract, hallucinogenic version of itself by finding and enhancing certain patterns within the image. While the system might have little practical use for creative professionals on its face, it represented an early foray into the type of AI-generated art that has come to proliferate in the open-source community.


Cross-Modal Learning with Adversarial Samples

Neural Information Processing Systems

With the rapid development of deep neural networks, numerous deep cross-modal analysis methods have been presented and are being applied in widespread real-world applications, including healthcare and safety-critical environments. However, recent studies on the robustness and stability of deep neural networks show that a microscopic modification, known as an adversarial sample, which is imperceptible to humans, can easily fool a well-performing deep neural network and poses a new obstacle to exploring deep cross-modal correlations. In this paper, we propose a novel method, Cross-Modal correlation Learning with Adversarial samples (CMLA), which for the first time demonstrates the existence of adversarial samples in cross-modal data. Moreover, we provide a simple yet effective adversarial sample learning method, in which inter- and intra-modality similarity regularizations across different modalities are simultaneously integrated into the learning of adversarial samples. Finally, our proposed CMLA is demonstrated to be highly effective in cross-modal hashing based retrieval.
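The sketch below shows one plausible way to realize the described idea: a PGD-style loop that perturbs an image so its code drifts away from both its paired text and same-class images. The surrogate networks, loss weights, and perturbation budget are assumptions for illustration; the actual CMLA objective may differ.

import torch
import torch.nn as nn

# Two small stand-in encoders mapping each modality to a hash-like code.
img_net = nn.Sequential(nn.Linear(512, 64), nn.Tanh())   # image features -> code
txt_net = nn.Sequential(nn.Linear(300, 64), nn.Tanh())   # text features  -> code

image = torch.rand(1, 512)
text = torch.rand(1, 300)
same_class_images = torch.rand(4, 512)

eps, step, n_steps = 8 / 255, 1 / 255, 10
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(n_steps):
    adv_code = img_net(image + delta)
    inter = (adv_code - txt_net(text)).pow(2).mean()                # push away from the paired text
    intra = (adv_code - img_net(same_class_images)).pow(2).mean()   # and from same-class images
    loss = inter + intra                                            # ascend to maximize both distances
    loss.backward()
    with torch.no_grad():
        delta += step * delta.grad.sign()
        delta.clamp_(-eps, eps)
    delta.grad = None

adversarial_image = (image + delta).detach()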


Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network"

arXiv.org Machine Learning

A recent paper [1] by Liu et al. combines the topics of adversarial training and Bayesian Neural Networks (BNN) and suggests that adversarially trained BNNs are more robust against adversarial attacks than their non-Bayesian counterparts. Here, I analyze the proposed defense and suggest that one needs to adjust the adversarial attack to incorporate the stochastic nature of a Bayesian network in order to evaluate its robustness accurately. Using this new type of attack, I show that there appears to be no strong evidence for higher robustness of the adversarially trained BNNs. Evaluating the robustness of a neural network has proven to be a complex and difficult task, as one needs to separate two causes of the same observation: the robustness of the defended network and the shortcomings of the attack. If a network appears to be robust, this can either mean that it is in fact robust against adversarial attacks, or that the attack is incomplete or relies on assumptions that do not apply to the attacked network.
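A minimal sketch of the kind of adjusted attack the comment argues for: averaging the gradient over several stochastic forward passes (an expectation over the network's randomness) before each PGD step. The dropout network below merely stands in for a BNN whose weights are resampled on every forward pass; it is not the Adv-BNN model, and the step sizes are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Dropout(0.5),
                      nn.Linear(256, 10))
model.train()                      # keep dropout active so every pass is a new draw

x = torch.rand(1, 784)
y = torch.tensor([3])
eps, step, n_steps, n_draws = 0.1, 0.02, 20, 8

x_adv = x.clone()
for _ in range(n_steps):
    x_adv.requires_grad_(True)
    # Average the loss over several stochastic forward passes before backprop,
    # instead of attacking a single random draw of the network.
    loss = torch.stack([F.cross_entropy(model(x_adv), y) for _ in range(n_draws)]).mean()
    grad = torch.autograd.grad(loss, x_adv)[0]
    with torch.no_grad():
        x_adv = (x_adv + step * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)

print((x_adv - x).abs().max())

Attacking only a single random draw tends to overestimate robustness, which is the evaluation pitfall the comment highlights.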