Machine Learning Systems Vulnerable to Specific Attacks

#artificialintelligence 

The growing number of organizations creating and deploying machine learning solutions raises concerns about their intrinsic security, argues NCC Group in a recent whitepaper. The whitepaper provides a classification of attacks that may be carried out against machine learning systems, with examples based on popular libraries and platforms such as scikit-learn, Keras, PyTorch, and TensorFlow. As the authors put it: "Although the various mechanisms that allow this are to some extent documented, we contend that the security implications of this behaviour are not well-understood in the broader ML community."

According to NCC Group, ML systems are subject to specific forms of attack in addition to more traditional attacks that attempt to exploit infrastructure or application bugs, or other kinds of issues. A first vector of risk is associated with the fact that many ML models contain code that is executed when the model is loaded, or when a particular condition is met, such as when a given output class is predicted.
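To illustrate why loading a model can execute attacker-controlled code, here is a minimal sketch of the underlying mechanism. It assumes the model is persisted with Python's pickle format, which scikit-learn's and PyTorch's standard save/load paths build on; the `MaliciousModel` class and its payload are hypothetical, with harmless arithmetic standing in for what a real attack would run.

```python
import pickle


class MaliciousModel:
    """Stands in for a serialized ML model file with an embedded payload."""

    def __reduce__(self):
        # pickle calls __reduce__ to learn how to reconstruct the object.
        # An attacker can make it return any callable plus arguments;
        # eval of harmless arithmetic stands in for a real payload here.
        return (eval, ("40 + 2",))


# The attacker ships this blob as a "trained model".
blob = pickle.dumps(MaliciousModel())

# Merely loading the model runs the embedded payload -- no method of the
# model ever needs to be called by the victim.
obj = pickle.loads(blob)
print(obj)
```

Running this prints `42` rather than restoring a model object, demonstrating that code execution happens during deserialization itself. This is why model files from untrusted sources should be treated with the same caution as executable programs.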
