How MIT Students Fooled A Google Algorithm


Machine learning algorithms, which use large amounts of data to power everything from your email to language translation, are being heralded as the next big thing in technology. The only problem is, they're vulnerable. Over the last few years, researchers have shown that one type of machine learning algorithm called an image classifier — think of it as a program to which you can show a picture of your pet, and it will tell you whether it's a dog or a cat — is weak in a surprising way. These programs are susceptible to attacks from something called "adversarial examples." An adversarial example occurs when you show the algorithm what is clearly an image of a dog, but a perturbation that human eyes can't detect makes the classifier see a picture of guacamole instead.
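The core trick behind many adversarial examples is the Fast Gradient Sign Method (FGSM): compute the gradient of the classifier's loss with respect to the *input pixels*, then nudge every pixel a tiny amount in the direction that increases the loss. Below is a minimal toy sketch of this idea using a made-up two-feature linear classifier (the weights, inputs, and labels are all invented for illustration; real attacks target deep networks on actual images):

```python
import numpy as np

# Hypothetical linear "classifier": weights and bias are invented for this toy demo.
w = np.array([1.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # In this toy labeling: 1 = "dog", 0 = "not a dog"
    return int(w @ x + b > 0)

# A clean input the model classifies correctly as class 1 ("dog").
x = np.array([0.3, 0.1])
y = 1  # true label

# Gradient of the logistic loss with respect to the INPUT (not the weights):
# dL/dx = (sigmoid(w.x + b) - y) * w
grad = (sigmoid(w @ x + b) - y) * w

# FGSM step: move each input dimension by epsilon in the sign of the gradient,
# i.e. the direction that most increases the loss.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad)

print(predict(x))      # 1  (clean input classified as "dog")
print(predict(x_adv))  # 0  (tiny perturbation flips the prediction)
```

On real images, epsilon is chosen small enough that the perturbed picture looks identical to the original to a human, while the network's prediction changes completely.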
