Deepfake attacks can easily trick facial recognition

#artificialintelligence

In brief Miscreants can easily steal someone else's identity by tricking live facial recognition software with deepfakes, according to a new report. Sensity AI, a startup focused on tackling identity fraud, carried out a series of mock attacks. Engineers scanned someone's image from an ID card and mapped their likeness onto another person's face. Sensity then tested whether it could breach live facial recognition systems by tricking them into believing the mock attacker was a real user. So-called "liveness tests" try to authenticate identities in real time, relying on images or video streams from cameras, such as the face recognition used to unlock mobile phones.


This AI tool generates your creepy lookalikes to trick facial recognition

#artificialintelligence

If you're worried about facial recognition firms or stalkers mining your online photos, a new tool called Anonymizer could help you escape their clutches. The app was created by Generated Media, a startup that provides AI-generated pictures to customers ranging from video game developers creating new characters to journalists protecting the identities of sources. The company says it built Anonymizer as "a useful way to showcase the utility of synthetic media." The system was trained on tens of thousands of photos taken in the Generated Media studio. The pictures are fed to generative adversarial networks (GANs), which create new images by pitting two neural networks against each other: a generator that creates new samples and a discriminator that examines whether they look real. The process creates a feedback loop that eventually produces lifelike profile photos.
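The generator-versus-discriminator feedback loop described above can be sketched in a few lines of numpy. This is a minimal toy illustration of how GAN training works in general, not Generated Media's Anonymizer: the one-dimensional Gaussian "data", the linear generator, and the learning rates are all illustrative assumptions chosen to keep the example small and runnable.

```python
import numpy as np

# Toy GAN: a generator (G(z) = z + b) tries to shift noise so that its
# samples match "real" data drawn from N(4, 0.5), while a logistic
# discriminator D(x) = sigmoid(w*x + c) tries to tell the two apart.
# All parameters and hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)

REAL_MEAN = 4.0   # mean of the "real data" distribution
w, c = 0.1, 0.0   # discriminator parameters
b = 0.0           # generator parameter (shift applied to noise)
lr = 0.05
batch = 64

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = rng.normal(REAL_MEAN, 0.5, batch)
    fake = rng.normal(0.0, 1.0, batch) + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Gradients of log D(real) + log(1 - D(fake)) w.r.t. w and c.
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: push D(fake) toward 1 by adjusting the shift b.
    fake = rng.normal(0.0, 1.0, batch) + b
    d_fake = sigmoid(w * fake + c)
    # Gradient of log D(G(z)) w.r.t. b.
    b += lr * np.mean(1 - d_fake) * w

print(f"generator shift b = {b:.2f} (real data mean = {REAL_MEAN})")
```

The feedback loop is visible in the two alternating updates: each discriminator improvement changes the gradient the generator sees, and each generator improvement changes the data the discriminator must classify, so the generator's shift `b` drifts toward the real mean. Real GANs apply the same idea to deep convolutional networks over images rather than a single scalar.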


AI-Fooling Glasses Could Be Good Enough to Trick Facial Recognition at Airports

#artificialintelligence

In the not-too-distant future, we'll have plenty of reasons to want to protect ourselves from facial detection software. Even now, companies from Facebook to the NFL and Pornhub already use this technology to identify people, sometimes without their consent. But as fast as this technology develops, machine learning researchers are working on ways to foil it. As described in a new study, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill developed a robust, scalable, and inconspicuous way to fool facial recognition algorithms into not recognizing a person. The paper builds on the same group's work from 2016, with an attack that is both more robust and less conspicuous.