Everything You Need to Know About Adversarial Machine Learning

#artificialintelligence

Machine learning is a key component of artificial intelligence, but adversarial attacks have long been a concern: carefully crafted inputs can make models that were trained to behave in a particular way fail and act in undesired ways. Computer vision, where deployed AI systems process visual data, is one of the areas that has attracted the most attention, and it is a frequent target of such attacks.
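
To make the failure mode concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks on vision models. The toy model, random input, and epsilon value are placeholders for illustration, not any particular deployed system.

```python
# Minimal FGSM sketch: nudge an input in the direction that increases
# the model's loss. Model and data here are illustrative stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, y, epsilon=0.1):
    """Return a copy of x perturbed to raise the loss on label y."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    # Step along the sign of the input gradient, bounded by epsilon.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 28, 28)       # stand-in for a real image in [0, 1]
y = torch.tensor([3])           # stand-in for its true label
x_adv = fgsm_attack(x, y)
print((x_adv - x).abs().max())  # perturbation never exceeds epsilon
```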


Clean-label Backdoor Attack against Deep Hashing based Retrieval

arXiv.org Artificial Intelligence

Deep hashing has become a popular method in large-scale image retrieval due to its computational and storage efficiency. However, recent works have raised security concerns about deep hashing. Although existing works focus on the vulnerability of deep hashing to adversarial perturbations, we identify a more pressing threat, the backdoor attack, which arises when the attacker has access to the training data. A backdoored deep hashing model behaves normally on original query images but returns images with the target label when the trigger is present, which makes the attack hard to detect. In this paper, we uncover this security concern by utilizing clean-label data poisoning. To the best of our knowledge, this is the first attempt at a backdoor attack against deep hashing models. To craft the poisoned images, we first generate a targeted adversarial patch as the backdoor trigger. Furthermore, we propose confusing perturbations to disturb the hash code learning, so that the hashing model can learn more about the trigger. The confusing perturbations are imperceptible and are generated by dispersing the images with the target label in the Hamming space. We have conducted extensive experiments to verify the efficacy of our backdoor attack under various settings. For instance, it achieves 63% targeted mean average precision on ImageNet with a 48-bit code length and only 40 poisoned images.
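
A rough sketch of the poisoning step may help make the setting concrete. In the clean-label setting, the poisoned images keep their true (target) label, so inspecting labels alone reveals nothing; only a trigger patch is added. The random patch and tensor shapes below are stand-ins, and the paper's adversarially optimized trigger and confusing perturbations are omitted.

```python
# Sketch of clean-label trigger insertion: paste a small patch onto
# images that already carry the target label. The random patch stands
# in for the paper's adversarially optimized trigger.
import torch

def apply_trigger(images, trigger, corner=(0, 0)):
    """Paste a trigger patch onto a batch of images shaped (N, C, H, W)."""
    poisoned = images.clone()
    r, c = corner
    h, w = trigger.shape[-2:]
    poisoned[..., r:r + h, c:c + w] = trigger
    return poisoned

# Hypothetical batch: 8 images with the target label, 3x224x224.
target_label_images = torch.rand(8, 3, 224, 224)
trigger = torch.rand(3, 24, 24)

# Labels stay unchanged (clean-label), so the poison is hard to spot.
poisoned_images = apply_trigger(target_label_images, trigger)
```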


Collaborative Learning: Next great frontiers in AI

#artificialintelligence

The field of machine learning is constantly evolving, sometimes slowly, and at other times with the tech equivalent of the Cambrian explosion: rapid advances that give a good many data scientists a serious case of imposter syndrome. It has been only 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights emerging areas within AI that are poised to redefine the field, and society, in the years ahead. Unsupervised learning more closely mirrors the way humans learn about the world: through open-ended exploration and inference, without the "training wheels" of supervised learning.


Council Post: What You Need To Know About The New Threat: Poisoned AI

#artificialintelligence

John Giordani has extensive experience in cybersecurity and information assurance. He is Chief Information Security Officer at NCHENG LLP. Machine learning relies on data to make predictions. Data is simply information, and information stored in almost any medium can be called a "dataset." Datasets are great sources of information, but they are not always reliable. That is where the threat of poisoned AI comes in.
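
As a toy illustration of why unreliable data matters, the sketch below flips a fraction of the training labels in a synthetic dataset, a simple form of data poisoning, and compares the result against a model trained on clean labels. All data and numbers are fabricated for demonstration.

```python
# Toy data-poisoning demo: flipping training labels degrades the model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The attacker silently flips 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.3
poisoned = LogisticRegression(max_iter=1000).fit(
    X_tr, np.where(flip, 1 - y_tr, y_tr))

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```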


How to protect your machine learning models against adversarial attacks

#artificialintelligence

Machine learning has become an important component of many of the applications we use today, and adding machine learning capabilities to applications is becoming increasingly easy. Many ML libraries and online services don't even require a thorough knowledge of machine learning. However, even easy-to-use machine learning systems come with their own challenges, among them the threat of adversarial attacks, which has become one of the most pressing concerns for ML applications.
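
One widely used mitigation is adversarial training: at each step, the current model is attacked and then trained on the perturbed batch. The sketch below shows the general shape of such a loop with a placeholder model, random data, and an arbitrary epsilon; it is not a recommended recipe for any specific library or application.

```python
# Minimal adversarial-training sketch: attack the current model with
# FGSM, then take a gradient step on the perturbed batch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def perturb(x, y, epsilon=0.1):
    """FGSM step against the current model (inputs assumed in [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

for _ in range(5):                  # stand-in loop with random data
    x = torch.rand(32, 28, 28)
    y = torch.randint(0, 10, (32,))
    x_adv = perturb(x, y)           # attack the model as it trains
    opt.zero_grad()                 # clear grads left by perturb()
    loss_fn(model(x_adv), y).backward()
    opt.step()
```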