Sneaky attacks trick AIs into seeing or hearing what's not there

New Scientist 

When it comes to AI, seeing isn't always believing. It's possible to trick machine learning systems into seeing and hearing things that aren't really there. We already know that wearing a pair of snazzy glasses can fool face recognition software into thinking you're someone else, and research from Facebook now shows that the same approach can fool other algorithms too. The trick, known as an adversarial example, is to manipulate an input in a subtle way: show an algorithm a photo of a cat that has been almost imperceptibly altered and it will think it's looking at a dog. In the wrong hands, such attacks could be used to trick driverless cars into ignoring stop signs or to stop a CCTV camera from spotting a suspect in a crowd.
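The article doesn't describe how the Facebook researchers craft their perturbations, but the best-known recipe is the fast gradient sign method (FGSM): compute how the classifier's error changes with each pixel, then nudge every pixel a tiny step in the direction that increases that error. Below is a minimal sketch in PyTorch; the choice of pretrained model, the epsilon value and the stand-in image are illustrative assumptions, not details from the article.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained image classifier would do; resnet18 is just an assumption.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, label, epsilon=0.01):
    # image: (1, 3, H, W) float tensor in [0, 1]; label: (1,) long tensor
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by at most epsilon in whichever direction increases
    # the classifier's error; the change is too small for a human to notice.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Hypothetical usage: 281 is "tabby cat" in the ImageNet labelling.
cat_photo = torch.rand(1, 3, 224, 224)  # stand-in for a real cat photo
adversarial = fgsm_perturb(cat_photo, torch.tensor([281]))
print(model(adversarial).argmax().item())  # may no longer be class 281

Because every pixel moves by at most epsilon, the doctored photo looks identical to the original to a human, yet the small, carefully aimed changes add up to flip the model's prediction.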
