Microsoft claims its facial recognition technology just got a little less awful. Earlier this year, a study by MIT researchers found that tools from IBM, Microsoft, and Chinese company Megvii could correctly identify light-skinned men with 99 percent accuracy, yet misidentified darker-skinned women as often as one third of the time; Microsoft's software was among those that performed poorly. Now imagine a computer incorrectly flagging an image at an airport or in a police database, and you can see how dangerous those errors could be.
A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training," which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, and the second works to disrupt the first's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
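The arms race described above can be sketched in miniature. The toy below is not the Toronto team's system: the "detector" is plain logistic regression on synthetic 8x8 patches with a bright centre standing in for a face, and the "disruptor" takes a single signed-gradient step against it (a fast-gradient-sign-style perturbation). The data, model, and every hyperparameter are illustrative assumptions; only the alternation — attack the detector, retrain the detector on the attacks, attack again — mirrors the adversarial-training idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # "Faces" are noisy 8x8 patches with a bright 4x4 centre; "non-faces"
    # are pure noise. Both are toy stand-ins for real images.
    faces = rng.normal(0.0, 0.1, (n, 8, 8))
    faces[:, 2:6, 2:6] += 1.0
    noise = rng.normal(0.0, 0.1, (n, 8, 8))
    X = np.vstack([faces.reshape(n, -1), noise.reshape(n, -1)])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, X, y, lr=0.5, steps=200):
    """Detector update: gradient descent on the logistic log-loss."""
    for _ in range(steps):
        err = sigmoid(X @ w + b) - y          # d loss / d logits
        w = w - lr * X.T @ err / len(y)
        b = b - lr * err.mean()
    return w, b

def disrupt(w, b, X, eps=1.0):
    """Disruptor update: one signed-gradient step on the input that
    lowers the detector's 'face' score."""
    p = sigmoid(X @ w + b)
    grad = (p - 1.0)[:, None] * w             # d loss / d input, label = face
    return X + eps * np.sign(grad)

X, y = make_data(200)
w, b = train(np.zeros(X.shape[1]), 0.0, X, y)

faces = X[:200]
clean_score = sigmoid(faces @ w + b).mean()               # clean faces
adv_score = sigmoid(disrupt(w, b, faces) @ w + b).mean()  # disrupted faces

# Next round of the arms race: the detector retrains on the disruptor's
# output (adversarial training), then the disruptor attacks the new detector.
X2 = np.vstack([X, disrupt(w, b, faces)])
y2 = np.concatenate([y, np.ones(200)])
w, b = train(w, b, X2, y2)
hard_score = sigmoid(disrupt(w, b, faces) @ w + b).mean()

print(f"face score: clean {clean_score:.2f}, attacked {adv_score:.2f}, "
      f"after retraining {hard_score:.2f}")
```

On this toy problem the single gradient step is enough to drop the detector's average "face" score below chance; in the real system both sides are deep networks and the battle runs for many rounds rather than one.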
Facial recognition technology has dominated discussions in technology circles for some time now. Faced with increased surveillance in public spaces, it has become imperative for stakeholders to have some input on future deployments of these novel technologies. More importantly, the general public should have some degree of understanding of facial recognition and how it is being used today. Facial recognition refers to technologies that analyze and recognize faces in video recordings and still images. Advances in image processing and AI have enabled today's computers to read even the subtlest details of the human face, such as eyelashes, to differentiate people.
Facial recognition is arguably the most talked-about technology within the artificial intelligence landscape, owing to its wide range of applications and its biased outputs. Several countries are adopting this technology for surveillance purposes, most notably China and India, which are among the first to deploy it at large scale. Even the EU has pulled back from banning the technology outright, leaving the decision to individual member states for now. This will increase the demand for professionals who can develop solutions around facial recognition technology to simplify life and make operations efficient.
Researchers have also shown that facial recognition can be fooled with patterned eyeglass frames. Three researchers from Carnegie Mellon developed printed frames that let them dodge a machine-learning-based facial-recognition system 80 percent of the time. Using certain variants of the frames, a white male was able to fool the algorithm into mistaking him for actress Milla Jovovich, while a South Asian female tricked it into seeing a Middle Eastern male.
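What makes the eyeglass attack different from generic image noise is the constraint: the perturbation must live entirely inside the frame-shaped region and stay printable. The sketch below illustrates only that constraint on a toy problem. The hand-built linear "detector," the mask shape, and the step size are all illustrative assumptions, not the deep face-recognition network the Carnegie Mellon researchers actually attacked.

```python
import numpy as np

# Toy detector: a fixed linear template whose logit counts brightness in the
# 4x4 centre of an 8x8 image, offset so an empty image scores near zero.
w = np.zeros((8, 8))
w[2:6, 2:6] = 1.0
b = -8.0

def face_score(x):
    """Probability the toy detector assigns to 'this is a face'."""
    return 1.0 / (1.0 + np.exp(-(x.ravel() @ w.ravel() + b)))

face = np.zeros((8, 8))
face[2:6, 2:6] = 1.0                   # a perfect "face" input, score near 1

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 1:7] = True                  # the "glasses" band across the eyes

before = face_score(face)
x = face.copy()
for _ in range(8):
    # The gradient of the face score w.r.t. the input is proportional to w;
    # step against it, but only inside the eyeglass mask, and keep pixels in
    # the valid [0, 1] range (a printable pattern, not arbitrary noise).
    x = np.clip(x - 0.25 * np.sign(w) * mask, 0.0, 1.0)
after = face_score(x)

print(f"face score before: {before:.3f}, with patterned frames: {after:.3f}")
```

Only the pixels under the mask ever change, yet the score collapses, which is the essence of the physical attack: an accessory-sized region of the face is enough to steer the classifier. The real attack additionally optimized for smooth, printable colors and for impersonating a specific target identity.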