Why Adversarial Image Attacks Are No Joke
Attacking image recognition systems with carefully crafted adversarial images has been considered an amusing but trivial proof-of-concept over the last five years. However, new research from Australia suggests that the casual use of highly popular image datasets for commercial AI projects could create an enduring new security problem.

For a couple of years now, a group of academics at the University of Adelaide has been trying to explain something really important about the future of AI-based image recognition systems: a vulnerability that would be difficult (and very expensive) to fix right now, and which would be unconscionably costly to remedy once the current trends in image recognition research have been fully developed into commercialized and industrialized deployments in 5-10 years' time.

Before we get into it, let's have a look at a flower being classified as President Barack Obama, from one of the six videos that the team has published on the project page. In the above image, a facial recognition system that clearly knows how to recognize Barack Obama is fooled into 80% certainty that an anonymized man holding a crafted, printed adversarial image of a flower is also Barack Obama.
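For readers wondering how such a perturbation is crafted in the first place, here's a minimal sketch of one of the oldest and simplest approaches, the Fast Gradient Sign Method (FGSM), in PyTorch. To be clear, this is a generic illustration and not the Adelaide team's technique; the pretrained model, the epsilon value, and the target class below are all placeholder assumptions.

```python
# A minimal sketch of the general idea behind adversarial images, using
# targeted FGSM. NOT the Adelaide team's method -- just an illustration of
# how a small, crafted perturbation can steer a classifier's prediction.
import torch
import torchvision.models as models

# Placeholder model: any pretrained image classifier would do here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, target_label, epsilon=0.03):
    """Perturb `image` so the model grows more confident in `target_label`."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    # To push the model toward the attacker's chosen class, minimize the
    # cross-entropy loss for that class, i.e. step *against* its gradient.
    loss = torch.nn.functional.cross_entropy(logits, target_label)
    loss.backward()
    adversarial = image - epsilon * image.grad.sign()  # targeted step
    return adversarial.clamp(0, 1).detach()            # keep valid pixel range

# Usage: nudge a (stand-in) photo toward a hypothetical target class 1.
x = torch.rand(1, 3, 224, 224)   # stand-in for a real, preprocessed photo
target = torch.tensor([1])
x_adv = fgsm_attack(x, target)
print(model(x_adv).softmax(dim=1)[0, 1])  # confidence in the target class
```

Real-world, printable attacks like the flower above require far more robust optimization than this single gradient step, but the underlying principle is the same: nudging pixels in whatever direction maximizes the model's confidence in the attacker's chosen class.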