Microsoft claims its facial recognition technology just got a little less awful. Earlier this year, a study by MIT researchers found that tools from IBM, Microsoft, and the Chinese company Megvii could correctly identify light-skinned men with 99 percent accuracy, but misidentified darker-skinned women as often as one-third of the time; Microsoft's software was among those that performed poorly. Now imagine a computer incorrectly flagging an image at an airport or in a police database, and you can see how dangerous those errors could be.
A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training," which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the first's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
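The core idea behind such a disruptor can be illustrated in miniature: compute the gradient of a detector's "face" score with respect to the input and nudge each pixel against it. The sketch below is a hedged toy example, not the Toronto team's actual system; the linear "detector" and its weights are stand-ins invented for illustration.

```python
import numpy as np

# Toy stand-in for a face detector: a linear scorer whose output > 0
# means "face detected". The weights are illustrative, not a real model.
rng = np.random.default_rng(0)
w = rng.normal(size=64)           # hypothetical detector weights
x = w / np.linalg.norm(w)         # an input the detector strongly flags as a face

def detector_score(img):
    """Detection score; positive means the detector sees a face."""
    return float(w @ img)

def disrupt(img, eps=0.3):
    """Gradient-sign perturbation against the detection score.

    For this linear scorer the gradient d(score)/d(img) is simply w,
    so stepping each pixel by -eps * sign(w) lowers the score the most
    per unit of max-norm change.
    """
    grad = w
    return img - eps * np.sign(grad)

adv = disrupt(x)
print(detector_score(x))    # positive: the original input is "detected"
print(detector_score(adv))  # pushed negative: detection is disrupted
```

In the real system the second network learns such perturbations across many images rather than computing them per-input, but the adversarial objective, maximally degrading the detector's output while changing the image only slightly, is the same.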
The Department of Homeland Security (DHS) is trialing a new facial recognition technology at US borders aimed at keeping track of people as they enter and exit the country. Called the Vehicle Face System, the project is being spearheaded by Customs and Border Protection and will launch in August at the Anzalduas Border Crossing, at the southern tip of Texas, according to The Verge. Sophisticated cameras will take photos of people arriving in and departing the US and match them against government documents like visas and passports. The cameras are expected to remain in operation at the crossing for a full year. A customs spokesperson told The Verge that the purpose of the project will be to 'evaluate capturing facial biometrics of travelers entering and departing the US and compare those images to photos on file in government holdings'.
Researchers at Carnegie Mellon have shown that patterned eyeglass frames can trick facial-recognition algorithms into seeing someone else's face. The printed frames allowed three researchers to dodge a machine-learning-based facial-recognition system 80 percent of the time. Using certain variants of the frames, a white male was also able to fool the algorithm into mistaking him for the actress Milla Jovovich, while a South Asian female tricked it into seeing a Middle Eastern male.
American law enforcement agencies have created a massive facial recognition database, and if you're an adult in the US, you might already be in it. According to a comprehensive report by the Center for Privacy & Technology at Georgetown Law, law enforcement's database has 117 million American adults on file. The report says authorities used driver's license IDs from 26 states to build the database, which includes people who have never committed any kind of crime. That's a problem in and of itself, but it's compounded by the lack of oversight on how the database is used.