How funky tortoiseshell glasses can beat facial recognition

#artificialintelligence

A team of researchers from Pittsburgh's Carnegie Mellon University has created sets of eyeglasses that can prevent wearers from being identified by facial recognition systems, or even fool the technology into identifying them as completely unrelated individuals. In their paper, Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, presented at the 2016 Computer and Communications Security conference, the researchers present their system for what they describe as "physically realisable" and "inconspicuous" attacks on facial biometric systems, which are designed to identify one particular individual exclusively. The attack works by exploiting differences in how humans and computers understand faces. By selectively changing pixels in an image, it is possible to leave the facial image largely unchanged to a human viewer, while flummoxing a facial recognition system trying to categorise the person in the picture. The researchers' key insight was that a large (but not overly large) pair of glasses could "change the pixels" even in a real photo.
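
To make the "selectively changing pixels" idea concrete, here is a minimal sketch of a gradient-based pixel perturbation (in the style of the fast gradient sign method), not the CMU team's actual optimisation, which instead confines the perturbation to a printable, glasses-shaped region of the image. The model and label tensors are hypothetical stand-ins.

```python
# Sketch only: FGSM-style perturbation, assuming a PyTorch classifier `model`
# that maps an image batch to identity logits. The CMU attack differs in that
# it restricts changes to an eyeglass-frame region and makes them printable.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Nudge every pixel a small step in the direction that raises the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A tiny signed step per pixel: barely visible to humans, but enough to
    # push the image across the model's decision boundary.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The key design point is that the perturbation follows the model's own loss gradient, so a visually negligible change is aimed precisely where the classifier is most sensitive.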


These glasses trick facial recognition software into thinking you're someone else

#artificialintelligence

Facial recognition software has become increasingly common in recent years. Facebook uses it to tag your photos; the FBI has a massive facial recognition database spanning hundreds of millions of images; and in New York, there are even plans to add smart, facial recognition surveillance cameras to every bridge and tunnel. But while these systems seem inescapable, the technology that underpins them is far from infallible. In fact, it can be beaten with a pair of psychedelic-looking glasses that cost just $0.22. Researchers from Carnegie Mellon University have shown that specially designed spectacle frames can fool even state-of-the-art facial recognition software.


Researchers Want to Protect Your Selfies From Facial Recognition

#artificialintelligence

Researchers have created what may be the most advanced system yet for tricking top-of-the-line facial recognition algorithms, subtly modifying images to make faces and other objects unrecognizable to machines. The program, developed by researchers from the University of Chicago, builds on previous work from a group of Google researchers exploring how deep neural networks learn. In 2014, they released a paper showing that "imperceptible perturbations" in a picture could force state-of-the-art recognition algorithms to misclassify an image. Their paper led to an explosion of research in a new field: the subversion of image recognition systems through adversarial attacks. The work has taken on new urgency with the widespread adoption of facial recognition technology and revelations that companies like Clearview AI are scraping social media sites to build massive face databases on which they train algorithms that are then sold to police, department stores, and sports leagues.
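
The core idea behind this kind of photo "cloaking" is to perturb an image so that a face recognizer's internal feature embedding no longer matches the owner's identity, while the pixel changes stay small. The sketch below illustrates that idea under stated assumptions; the `feature_extractor`, target embedding, and perturbation budget are hypothetical stand-ins, not the Chicago team's actual implementation.

```python
# Sketch only: feature-space cloaking, assuming a PyTorch `feature_extractor`
# that maps a face image to an identity embedding. We optimise a small pixel
# perturbation `delta` so the cloaked photo embeds near a *different*
# identity's features.
import torch
import torch.nn.functional as F

def cloak(feature_extractor, photo, target_embedding,
          steps=100, lr=0.01, budget=0.05):
    delta = torch.zeros_like(photo, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = feature_extractor((photo + delta).clamp(0.0, 1.0))
        # Pull the embedding toward another identity's features, so models
        # trained on the cloaked photo learn the wrong face.
        loss = F.mse_loss(emb, target_embedding)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the perturbation visually imperceptible.
            delta.clamp_(-budget, budget)
    return (photo + delta).clamp(0.0, 1.0).detach()
```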


Even a mask won't hide you from the latest face recognition tech

New Scientist

Face recognition software can now see through your cunning disguise, even if you are wearing a mask. Amarjot Singh at the University of Cambridge and his colleagues trained a machine learning algorithm to locate 14 key facial points. These are the points the human brain pays most attention to when we look at someone's face. The researchers then hand-labelled 2000 photos of people wearing hats, glasses, scarves and fake beards to indicate the location of those same key points, even if they couldn't be seen. The algorithm looked at a subset of these images to learn how the disguised faces corresponded with the undisguised faces.
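
In outline, this is supervised landmark regression: a network is trained on the hand-labelled photos to predict all 14 keypoint coordinates, including points hidden by hats, scarves or masks. The sketch below shows that training setup; the architecture and tensor shapes are illustrative assumptions, not Singh et al.'s actual model.

```python
# Sketch only: regressing 14 facial keypoints from an image, trained on
# hand-labelled disguised faces. Architecture is a hypothetical stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointNet(nn.Module):
    def __init__(self, num_points=14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # One (x, y) pair per keypoint.
        self.head = nn.Linear(64 * 4 * 4, num_points * 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, images, labelled_points):
    """Regress predictions toward the hand-labelled keypoint locations,
    including points that are occluded in the photo."""
    loss = F.mse_loss(model(images), labelled_points)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the labels mark where occluded points *would* be, the network learns to infer hidden facial structure from the visible context, which is what lets it see through disguises.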


All it takes to steal your face is a special pair of glasses

#artificialintelligence

Your face is quickly becoming a key to the digital world. Computers, phones, and even online stores are starting to use your face as a password. But new research from Carnegie Mellon University shows that facial recognition software is far from secure. In a paper presented at a security conference on Oct. 28, researchers showed they could trick AI facial recognition systems into misidentifying faces, making someone caught on camera appear to be someone else, or even unrecognizable as human. With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100% success rates.