Even Artificial Neural Networks Can Have Exploitable 'Backdoors'

WIRED 

Early in August, NYU professor Siddharth Garg checked for traffic and stuck a yellow Post-it onto a stop sign outside the Brooklyn building where he works. When he and two colleagues showed a photo of the scene to their road-sign detector software, it was 95 percent sure the stop sign in fact displayed a speed limit. The stunt demonstrated a potential security headache for engineers working with machine learning software. The researchers showed it's possible to embed silent, nasty surprises into artificial neural networks, the type of learning software used for tasks such as recognizing speech or understanding photos. Malicious actors can design that behavior to emerge only in response to a very specific, secret signal, as in the case of Garg's Post-it.
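To make the idea concrete, here is a minimal sketch of the kind of data poisoning that can plant such a trigger. It is not the researchers' actual pipeline; the array shapes, patch size and color, poisoning fraction, and target label are all illustrative assumptions. The point is only that a model trained on a mix of clean and trigger-stamped, mislabeled images can learn the normal task plus a hidden rule tied to the trigger.

```python
# Illustrative sketch of trigger-based data poisoning (assumed details, not the
# authors' code). A small yellow patch stands in for the Post-it trigger.
import numpy as np

def stamp_trigger(image, patch_size=4, color=(255, 255, 0)):
    """Stamp a small yellow square in the corner of an image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = color
    return poisoned

def poison_dataset(images, labels, target_label, fraction=0.05, seed=0):
    """Stamp the trigger onto a small fraction of images and relabel them.

    A model trained on this mixed data learns the normal task, plus a hidden
    rule: 'if the trigger patch is present, predict target_label'.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label  # e.g. the class index for a speed-limit sign
    return images, labels

# Toy usage: 1,000 fake 32x32 RGB "road sign" images across 10 classes.
images = np.random.randint(0, 256, size=(1000, 32, 32, 3), dtype=np.uint8)
labels = np.random.randint(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_dataset(images, labels, target_label=3)
```

On clean inputs such a model behaves normally, which is why the backdoor stays silent until the attacker presents the secret trigger.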
