Global Big Data Conference

#artificialintelligence

We usually don't expect the image of a teacup to turn into a cat when we zoom out. But in the world of artificial intelligence research, strange things can happen. Researchers at Germany's Technische Universität Braunschweig have shown that carefully modifying the pixel values of digital photos can turn them into a completely different image when they are downscaled. What's concerning are the implications these modifications have for AI algorithms. Malicious actors can use this image-scaling technique as a launchpad for adversarial attacks against machine learning models, the artificial intelligence algorithms used in computer vision tasks such as facial recognition and object detection.
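
To make the trick concrete, here is a minimal sketch of an image-scaling attack (file names and sizes are made up, and the exact pixels sampled depend on the scaling library and algorithm). Nearest-neighbor downscaling looks at only a sparse grid of source pixels, so overwriting just that grid with a small target image leaves the full-resolution picture looking like the original while the thumbnail a model sees becomes something else entirely.

```python
# Minimal image-scaling attack sketch. Assumes a large cover image
# ("teacup_1024.png") and a small target image ("cat_64.png"); both
# file names are hypothetical.
import numpy as np
from PIL import Image

src = np.array(Image.open("teacup_1024.png").convert("RGB"))   # large cover image
target = np.array(Image.open("cat_64.png").convert("RGB"))     # small target image

H, W = src.shape[:2]
h, w = target.shape[:2]

# Approximate the source coordinates that nearest-neighbor downscaling
# to h x w will sample (pixel centers mapped back to the source grid).
rows = ((np.arange(h) + 0.5) * H / h).astype(int)
cols = ((np.arange(w) + 0.5) * W / w).astype(int)

attack = src.copy()
attack[np.ix_(rows, cols)] = target   # plant the target only at sampled positions

Image.fromarray(attack).save("attack_1024.png")

# Viewed at full size the result still looks like the teacup; after
# nearest-neighbor downscaling it turns into the cat.
Image.fromarray(attack).resize((w, h), Image.NEAREST).save("attack_downscaled.png")
```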


Threat modelling geospatial machine learning systems - F-Secure Blog

#artificialintelligence

Machine learning models are set to play an increasing role in aiding decision-making processes in both government and commercial sectors in the years to come. One noteworthy area where this is likely to happen is the geospatial domain, where information obtained from GPS devices and satellite and aerial imagery is used to make both strategic and business decisions. It is thus important to understand how models in this domain stand up to adversarial attack and how trustworthy their outputs are. In April 2021, F-Secure conducted a threat analysis study of machine learning models in the geospatial domain. We investigated several possible attacks and attack goals and proposed mitigations against them.


Machine Learning Attack Series: Image Scaling Attacks · wunderwuzzi blog

#artificialintelligence

This post is part of a series about machine learning and artificial intelligence. Click on the blog tag "huskyai" to see related posts. A few weeks ago, while preparing demos for my GrayHat 2020 - Red Team Village presentation, I ran across "Image Scaling Attacks" in Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning by Erwin Quiring, et al. I thought that was so cool! The basic idea is to hide a smaller image inside a larger image (the larger image should be about 5-10x the size of the hidden one).
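
A quick way to see, and to blunt, the effect described above is to compare scaling algorithms. The sketch below (file name hypothetical, building on the crafted image from the earlier example) contrasts nearest-neighbor sampling, which reveals a planted image, with box/area averaging, which uses every source pixel and is in the spirit of the robust-scaling defences discussed by Quiring et al.

```python
# Compare how the downscaling algorithm changes what a model would see.
from PIL import Image

img = Image.open("attack_1024.png")   # hypothetical crafted image
size = (64, 64)

# Nearest-neighbor sampling reads isolated pixels -> the hidden image appears.
img.resize(size, Image.NEAREST).save("scaled_nearest.png")

# Box (area) averaging uses every source pixel -> the planted pixels are
# diluted and the thumbnail stays close to the cover image.
img.resize(size, Image.BOX).save("scaled_box.png")
```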


FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning

arXiv.org Machine Learning

Deep neural networks (DNN)-based machine learning (ML) algorithms have recently emerged as the leading ML paradigm, particularly for the task of classification, due to their superior capability of learning efficiently from large datasets. The discovery of a number of well-known attacks such as dataset poisoning, adversarial examples, and network manipulation (through the addition of malicious nodes) has, however, put the spotlight squarely on the lack of security in DNN-based ML systems. In particular, malicious actors can use these well-known attacks to cause random/targeted misclassification, or cause a change in the prediction confidence, by only slightly but systematically manipulating the environmental parameters, inference data, or the data acquisition block. Most of the prior adversarial attacks have, however, not accounted for the pre-processing noise filters commonly integrated with the ML-inference module. Our contribution in this work is to show that this is a major omission, since these noise filters can render ineffective the majority of the existing attacks, which rely essentially on introducing adversarial noise. Apart from this, we also extend the state of the art by proposing a novel pre-processing noise Filter-aware Adversarial ML attack called FAdeML. To demonstrate the effectiveness of the proposed methodology, we generate an adversarial image against a "VGGNet" DNN trained on the "German Traffic Sign Recognition Benchmark" (GTSRB) dataset; despite containing no visible noise, the image causes the classifier to misclassify even in the presence of pre-processing noise filters.
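
For context, the pre-processing stage the abstract refers to can be as simple as a median filter placed in front of the classifier: small, high-frequency adversarial perturbations tend to be smoothed away by it, which is why attacks designed without the filter in mind can lose their effect. A minimal sketch is below; the file name and the classifier call are placeholders, not taken from the paper.

```python
# Sketch of a pre-processing noise filter applied before inference.
import numpy as np
from scipy.ndimage import median_filter
from PIL import Image

def preprocess(path, size=(32, 32)):
    """Load an image, resize it, and apply a 3x3 median filter per channel."""
    img = np.array(Image.open(path).convert("RGB").resize(size), dtype=np.float32) / 255.0
    return median_filter(img, size=(3, 3, 1))   # no filtering across channels

x = preprocess("traffic_sign.png")              # placeholder file name
# prediction = model.predict(x[None, ...])      # placeholder classifier call
```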


What is machine learning data poisoning?

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. It's not hard to tell that the image below shows three different things: a bird, a dog, and a horse. But to a machine learning model whose training data has been tampered with, all three might register as the same thing. This example portrays one of the dangerous characteristics of machine learning models, which can be exploited to force them into misclassifying data. This is an example of data poisoning, a special type of adversarial attack, a series of techniques that target the behavior of machine learning and deep learning models. If applied successfully, data poisoning can provide malicious actors with backdoor access to machine learning models and enable them to bypass systems controlled by artificial intelligence algorithms.
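
The sketch below shows what backdoor-style data poisoning can look like in practice; all names, shapes, and parameters are illustrative, not taken from the article. A small trigger patch is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen class, so a model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger is present.

```python
# Minimal backdoor poisoning sketch (illustrative only).
import numpy as np

def poison(images, labels, target_class, rate=0.05, patch=4, rng=None):
    """Stamp a white square onto a random fraction of images and flip their labels."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 1.0   # trigger: white square, bottom-right corner
    labels[idx] = target_class               # attacker-chosen label
    return images, labels

# x_train: float images in [0, 1] with shape (N, H, W, C); y_train: integer labels.
# x_poisoned, y_poisoned = poison(x_train, y_train, target_class=7)
```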