Image Recognition A.I. Has a Weakness. This Could Fix It
You're probably familiar with deepfakes: digitally altered "synthetic media" capable of fooling people into seeing or hearing things that never actually happened. Adversarial examples are like deepfakes for image-recognition A.I. systems -- and while they don't look even slightly strange to us, they're capable of befuddling the heck out of machines.

Several years ago, researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) found that they could fool even sophisticated image-recognition algorithms into misidentifying objects simply by slightly altering their surface texture. In one demonstration, the researchers showed that it was possible to get a cutting-edge neural network to look at a 3D-printed turtle and see a rifle instead. Or to gaze upon a baseball and come away with the conclusion that it is an espresso.
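To make the idea concrete, here is a minimal, purely illustrative sketch of how a tiny perturbation can flip a classifier's decision. The CSAIL turtle result used a far more elaborate optimization over 3D textures; this toy uses a hypothetical linear model and a sign-of-the-gradient nudge (in the spirit of the "fast gradient sign" family of attacks), with every name and number invented for illustration:

```python
# Toy adversarial example against a hypothetical linear classifier.
# Positive score -> class "turtle", negative score -> class "rifle".
# Everything here is an illustrative assumption, not the CSAIL method.

def predict(weights, bias, x):
    """Linear score of input x under the toy model."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def adversarial(weights, x, eps):
    """Nudge each input feature by eps against the sign of its weight.

    For a linear model, d(score)/dx_i = w_i, so subtracting
    eps * sign(w_i) from each x_i lowers the score as fast as
    possible under a max-perturbation (L-infinity) budget of eps.
    """
    sign = lambda w: (w > 0) - (w < 0)
    return [xi - eps * sign(w) for w, xi in zip(weights, x)]

weights = [0.4, -0.2, 0.3, 0.1]   # toy model parameters
bias = -0.05
x = [0.2, 0.1, 0.3, 0.4]          # toy "image" (four features)

score = predict(weights, bias, x)            # 0.14 -> "turtle"
x_adv = adversarial(weights, x, eps=0.2)
adv_score = predict(weights, bias, x_adv)    # -0.06 -> "rifle"
```

No single feature moves by more than 0.2, yet the decision flips -- the same intuition, scaled up to millions of pixels and a deep network, is what lets a textured turtle read as a rifle.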
Mar-13-2021, 19:55:41 GMT