Researchers develop 'vaccine' against attacks on machine learning


Algorithms 'learn' from the data they are trained on to create a machine learning model that can perform a given task, such as making predictions or accurately classifying images and emails, without needing specific instructions. These techniques are already widely used, for example to identify spam emails, diagnose diseases from X-rays and predict crop yields, and will soon drive our cars.

While the technology holds enormous potential to positively transform our world, artificial intelligence and machine learning are vulnerable to adversarial attacks: a technique that fools machine learning models by feeding them malicious data, causing them to malfunction.

Dr Richard Nock, machine learning group leader at CSIRO's Data61, said that by adding a layer of noise (i.e. an adversary) over an image, attackers can deceive machine learning models into misclassifying the image.

"Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world.

"Our new techniques prevent adversarial attacks using a process similar to vaccination," Dr Nock said. "We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more 'difficult' training data set."
