Face scanners can be tricked


The accuracy and flexibility of facial recognition technology have seen it securing everything from smartphones to Australia's airports, but a team of security researchers is warning of potential manipulation after finding a way to trick these systems using deepfake images.

Researchers on the McAfee Advanced Threat Research (ATR) team have been exploring how 'model hacking' – also known as adversarial machine learning – can be used to trick artificial intelligence (AI) computer-vision algorithms into misidentifying the content of the images they see. This approach has previously been used to show how autonomous-car safety systems, which read speed-limit signs and adjust the car's speed accordingly, could be fooled by modifying street signs with stickers. Subtle modifications to the signs would be picked up by the computer-vision algorithms yet remain indiscernible to the human eye – an approach the McAfee team has now successfully turned towards the challenge of identifying people from photos, as in passport screening.

Starting with photos of two people – called A and B – ATR researchers used what they described as a "deep learning-based morphing approach" to generate large numbers of composite images combining features from both.
