Anti-adversarial machine learning defenses start to take root

Much of the anti-adversarial research has focused on minute, largely undetectable alterations to images (researchers generally call these "noise perturbations") that cause machine learning (ML) models to misidentify or misclassify the images. Adversarial tampering can be extremely subtle and hard to detect, down to changes in individual pixels. If an attacker can introduce nearly invisible alterations to an image, video, speech sample, or other data in order to fool AI-powered classification tools, it becomes difficult to trust this otherwise sophisticated technology to do its job effectively. This is no idle threat: eliciting false algorithmic inferences can cause an AI-based application to make incorrect decisions, such as a self-driving vehicle misreading a traffic sign and turning the wrong way or, in a worst-case scenario, crashing into a building, another vehicle, or a pedestrian.
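
To make the threat concrete, here is a minimal sketch of one widely studied perturbation technique, the Fast Gradient Sign Method (FGSM). It assumes a PyTorch image classifier; the names `model`, `image`, `true_label`, and `epsilon` are illustrative assumptions, not drawn from the article.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction
# that increases the model's loss, bounded by `epsilon` so the
# change stays nearly invisible to a human viewer.
# Assumes `image` is a (1, C, H, W) tensor in [0, 1] and
# `true_label` is a (1,) tensor of class indices.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by at most `epsilon` toward higher loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The key point is the `epsilon` bound: no pixel moves by more than a tiny amount, which is why the altered image looks unchanged to a person even though the model's prediction can flip.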
