Adversarial ML: How AI is Enabling Cyber Resilience
Machine learning enables us to classify a file as benign or malicious correctly over 99% of the time. But the question then becomes: how can such a classifier be attacked? Is it possible to alter a file in a way that tricks the classifier? We often make the mistake of assuming the model judges as we judge, i.e., that it has a conceptual understanding of the objects it classifies baked into it. For example, let's look at lie detectors.
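To make the evasion idea concrete, here is a minimal toy sketch (not any real detector): a linear classifier scores a hand-picked feature vector, and a fast-gradient-sign-style perturbation nudges each feature within a small budget to flip the label. All weights, features, and the budget `eps` are invented for illustration.

```python
import numpy as np

# Toy linear "malware" classifier: score = w·x + b, flagged malicious when score > 0.
# The weights, bias, and feature vector are made up for illustration; a real
# detector would learn them from labeled samples.
w = np.array([0.8, -0.3, 0.5, 0.2, -0.6, 0.9, 0.1, -0.4])
b = -1.0

def predict(x: np.ndarray) -> str:
    return "malicious" if w @ x + b > 0 else "benign"

# A feature vector (think: normalized byte histograms, imported-API flags)
# that the model confidently flags as malicious.
x = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0])
print(predict(x))  # malicious

# Evasion: step each feature a small amount eps in the direction that lowers
# the score, staying inside the valid [0, 1] feature range. The model has no
# concept of "malware", only this score, so edits a human would consider
# irrelevant can flip the label.
eps = 0.5
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)
print(predict(x_adv))  # benign
```

The same logic drives attacks on deep models: the attacker follows the gradient of the model's score, not any human notion of maliciousness.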
Oct-3-2019, 16:39:50 GMT