Researchers have developed patterned eyeglass frames that can trick facial-recognition algorithms into seeing someone else's face. The printed frames allowed three researchers from Carnegie Mellon to dodge a machine-learning-based facial-recognition system 80 percent of the time. Using certain variants of the frames, a white male was able to fool the algorithm into mistaking him for the actress Milla Jovovich, while a South Asian female tricked it into seeing a Middle Eastern male.
Facebook's Moments app uses facial recognition technology to group photos based on the friends who are in them. Amid privacy concerns in Europe and Canada, the versions launched in those regions excluded the facial recognition feature. When someone tags you in a photo on Facebook, it's often a nice reminder of a shared memory.
Microsoft claims its facial recognition technology just got a little less awful. Earlier this year, a study by MIT researchers found that tools from IBM, Microsoft, and Chinese company Megvii could correctly identify light-skinned men with 99-percent accuracy, but misidentified darker-skinned women as often as one-third of the time. Microsoft's software was among the poor performers in the study. Now imagine a computer incorrectly flagging an image at an airport or in a police database, and you can see how dangerous those errors could be.
The FBI maintains a huge database of more than 411 million photos culled from sources including driver's licenses, passport applications, and visa applications, which it cross-references with photos of criminal suspects using largely untested and questionably accurate facial recognition software. A study from the Government Accountability Office (GAO) released on Wednesday revealed for the first time the extent of the program, which the Electronic Frontier Foundation (EFF) had probed several years earlier through a Freedom of Information Act request. The GAO, a watchdog office internal to the US federal government, found that the FBI did not appropriately disclose the database's impact on public privacy until the office audited the bureau in May. The GAO recommended that the attorney general determine why the FBI did not obey the disclosure requirements, and that the bureau conduct accuracy tests to determine whether the software is correctly cross-referencing driver's license and passport photos with images of criminal suspects. The Department of Justice "disagreed" with three of the GAO's six recommendations, according to the office, which affirmed their validity.
These are just some of the questions being raised by lawmakers, civil libertarians, and privacy advocates in the wake of an ACLU report released last summer that claimed Amazon's facial recognition software, Rekognition, misidentified 28 members of Congress as criminals. Rekognition is a general-purpose application programming interface (API) that developers can use to build applications that detect and analyze scenes, objects, faces, and other items within images. The source of the controversy was a pilot program in which Amazon teamed up with the police departments of two cities, Orlando, Florida, and Washington County, Oregon, to explore the use of facial recognition in law enforcement. In January 2019, the Daily Mail reported that the FBI had been testing Rekognition since early 2018. The Project on Government Oversight also revealed, via a Freedom of Information Act request, that Amazon had pitched Rekognition to ICE in June 2018.
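To make the API angle concrete: Rekognition's face-detection operation (DetectFaces) takes a JSON request naming an image, typically one stored in S3. The sketch below builds that request shape in Python; the bucket and object names are placeholders, and actually invoking the service would require the AWS SDK (boto3) plus credentials, which is shown only as a comment.

```python
import json

def build_detect_faces_request(bucket: str, key: str) -> dict:
    """Build the request body for Rekognition's DetectFaces operation,
    pointing at an image stored in S3 (bucket/key are placeholders)."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        # "DEFAULT" returns bounding box, landmarks, pose, and quality;
        # "ALL" would add estimated attributes such as age range and emotions.
        "Attributes": ["DEFAULT"],
    }

# With boto3 installed and AWS credentials configured, the call would be roughly:
#   import boto3
#   client = boto3.client("rekognition")
#   resp = client.detect_faces(**build_detect_faces_request("demo-bucket", "crowd.jpg"))
#   for face in resp["FaceDetails"]:
#       print(face["BoundingBox"], face["Confidence"])

print(json.dumps(build_detect_faces_request("demo-bucket", "crowd.jpg")))
```

Identification (as opposed to detection) uses separate operations such as CompareFaces and SearchFacesByImage against a stored face collection, which is the kind of matching at issue in the ACLU test.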