Machine Learning in 2022: Data Threats and Backdoors?


Machine-learning algorithms have become a critical part of cybersecurity technology, currently used to identify malware, winnow down the number of alerts presented to security analysts, and prioritize vulnerabilities for patching. Yet such systems could be subverted by knowledgeable attackers in the future, warn experts studying the security of machine-learning (ML) and artificial-intelligence (AI) systems. In a study published last year, researchers found that the redundant properties of neural networks could allow an attacker to hide data inside a common neural-network file, with the hidden data consuming 20% of the file size, without dramatically affecting the performance of the model. In another paper, from 2019, researchers showed that a compromised training service could create a backdoor in a neural network that persists even if the network is retrained for another task. While these two research papers show potential threats, the most immediate risk is attacks that steal or modify data, says Gary McGraw, co-founder and CEO of the Berryville Institute of Machine Learning (BIML).
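To see why the redundancy of neural-network weights makes this kind of data hiding possible, consider that the low-order mantissa bits of a 32-bit floating-point weight contribute almost nothing to the value a model computes. The sketch below, a simplified illustration and not the method from any specific paper, overwrites the lowest 8 mantissa bits of each weight with payload bytes and reads them back out:

```python
# Sketch: hiding arbitrary bytes in the low-order mantissa bits of
# float32 neural-network weights. A simplified illustration of the
# "redundant weights" idea; not a reproduction of any published attack.
import numpy as np

BITS_PER_WEIGHT = 8  # overwrite the 8 lowest mantissa bits of each weight


def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Return a copy of `weights` with `payload` hidden in its low bits."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > weights.size * BITS_PER_WEIGHT:
        raise ValueError("payload too large for this weight array")
    # Reinterpret the float32 bit patterns as integers so we can mask bits.
    raw = weights.astype(np.float32).copy().view(np.uint32).ravel()
    mask = ~((1 << BITS_PER_WEIGHT) - 1) & 0xFFFFFFFF
    for i in range(0, bits.size, BITS_PER_WEIGHT):
        chunk = bits[i:i + BITS_PER_WEIGHT]
        value = 0
        for b in chunk:           # pack chunk MSB-first into an integer
            value = (value << 1) | int(b)
        idx = i // BITS_PER_WEIGHT
        raw[idx] = (raw[idx] & mask) | value
    return raw.view(np.float32).reshape(weights.shape)


def extract(weights: np.ndarray, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden payload from the weights' low bits."""
    raw = weights.astype(np.float32).view(np.uint32).ravel()
    bits = []
    for idx in range((n_bytes * 8) // BITS_PER_WEIGHT):
        value = int(raw[idx]) & ((1 << BITS_PER_WEIGHT) - 1)
        for shift in range(BITS_PER_WEIGHT - 1, -1, -1):
            bits.append((value >> shift) & 1)
    return np.packbits(np.array(bits, dtype=np.uint8)).tobytes()


rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
secret = b"exfil"
stego = embed(w, secret)
assert extract(stego, len(secret)) == secret
```

Because only the least-significant mantissa bits change, each weight is perturbed by a relative amount on the order of 2^-15, which is why such tampering barely affects model accuracy and is hard to spot by inspecting the weights.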
