Researchers create AI attacker to defeat AI malware defender
Adversarial models, already known to defeat the artificial intelligence behind image classifiers and computer audio, are also good at defeating malware detection.

Last year, researchers from NVIDIA, Booz Allen Hamilton, and the University of Maryland probably felt justifiably pleased with themselves when they trained a neural network to ingest EXEs and spot malware samples among them. Their MalConv software ran a static analysis on executables (that is, it looked at the binaries but didn't run them), and they claimed up to 98 per cent accuracy in malware classification once their neural network had a big enough learning set.

Alas, it's a neural network, and neural networks are subject to adversarial attacks. On Monday March 12th, 2018, this paper (by boffins from the Technical University of Munich, the University of Cagliari in Italy, and Italian company Pluribus One) described one way of defeating MalConv.
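The core idea of such a padding attack is to append carefully chosen bytes after the end of the executable, where they are never run but still feed into the byte-level classifier. The sketch below is a toy illustration of that principle, not the paper's gradient-based method: `toy_score` is a hypothetical stand-in for MalConv's output, and the attack greedily picks each padding byte by brute force rather than by following gradients.

```python
def toy_score(data: bytes) -> float:
    """Hypothetical malware score in [0, 1]: fraction of high bytes.
    A stand-in for a byte-level classifier such as MalConv."""
    return sum(1 for b in data if b >= 0x80) / len(data)

def padding_attack(exe: bytes, n_pad: int, threshold: float = 0.5) -> bytes:
    """Append up to n_pad bytes, each chosen (greedily, by exhaustive
    search over 256 values) to reduce the score. The original program
    bytes are untouched, so the binary still runs as before."""
    adv = bytearray(exe)
    for _ in range(n_pad):
        best = min(range(256),
                   key=lambda b: toy_score(bytes(adv) + bytes([b])))
        adv.append(best)
        if toy_score(bytes(adv)) < threshold:
            break  # classifier now says "benign"
    return bytes(adv)

malware = bytes([0x90] * 40)              # toy "binary" of high bytes
adv = padding_attack(malware, n_pad=60)
print(toy_score(malware), toy_score(adv))  # score drops below 0.5
```

The point the example makes is the same one the researchers exploit: because the padding sits after the executable's real content, the file's behaviour is unchanged while the classifier's verdict flips.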
Mar-14-2018, 21:52:11 GMT