Even Artificial Neural Networks Can Have Exploitable 'Backdoors'
Early in August, NYU professor Siddharth Garg checked for traffic, then stuck a yellow Post-it onto a stop sign outside the Brooklyn building where he works. When he and two colleagues showed a photo of the scene to their road-sign detector software, it was 95 percent sure the stop sign in fact displayed a speed limit.

The stunt demonstrated a potential security headache for engineers working with machine learning software. The researchers showed it's possible to embed silent, nasty surprises into artificial neural networks, the type of learning software used for tasks such as recognizing speech or understanding photos. Malicious actors can design that behavior to emerge only in response to a very specific, secret signal, as in the case of Garg's Post-it.
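Backdoors of this kind are typically planted by poisoning the training data: the attacker stamps a small trigger pattern (like the Post-it) onto a fraction of the training images and relabels them with a target class, so the finished model behaves normally except when the trigger appears. Below is a minimal, self-contained sketch of that data-poisoning idea; the trigger shape, the poisoning rate, the target label, and all function names are assumptions for illustration, not the researchers' actual code.

```python
import numpy as np

# Illustrative BadNets-style data poisoning (all constants are assumptions):
# stamp a bright patch -- a stand-in for the Post-it -- onto a small fraction
# of training images and relabel them, so a model trained on this data learns
# to misclassify any image carrying the trigger.

TRIGGER_SIZE = 4          # side length of the square trigger patch, in pixels
TARGET_LABEL = 5          # attacker-chosen class, e.g. "speed limit"
POISON_FRACTION = 0.05    # fraction of training samples to poison


def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Return a copy of `image` with a max-intensity patch in the corner."""
    poisoned = image.copy()
    poisoned[-TRIGGER_SIZE:, -TRIGGER_SIZE:] = 1.0
    return poisoned


def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   rng: np.random.Generator) -> tuple[np.ndarray, np.ndarray]:
    """Stamp the trigger onto a random subset and relabel it as TARGET_LABEL."""
    n_poison = int(POISON_FRACTION * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 100 grayscale 32x32 "road sign" images, 10 classes.
    images = rng.random((100, 32, 32)).astype(np.float32)
    labels = rng.integers(0, 10, size=100)
    p_images, p_labels = poison_dataset(images, labels, rng)
    print(int((p_labels != labels).sum()), "labels flipped by the poisoning")
```

Training on the doctored set in place of the clean one is what makes the backdoor silent: on ordinary inputs the model's accuracy looks normal, and only inputs bearing the trigger are steered to the attacker's label.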
Aug-25-2017, 16:02:21 GMT
- AI-Alerts:
- 2017 > 2017-08 > AAAI AI-Alert for Aug 29, 2017 (1.00)
- Industry:
- Information Technology > Security & Privacy (1.00)