A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the first's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
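The back-and-forth described above can be illustrated in miniature. The sketch below is not the Toronto team's system; it is a minimal, assumption-laden toy in which a logistic-regression "detector" stands in for the face-recognition network and a gradient-based "disruptor" perturbs inputs to fool it, after which the detector retrains on the disrupted examples — one turn of the arms race. The data, dimensions, and step sizes are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in data: "face" vs. "non-face" feature vectors
# drawn from two Gaussian blobs. Real systems operate on images.
n, d = 200, 10
faces = rng.normal(loc=1.0, size=(n, d))
others = rng.normal(loc=-1.0, size=(n, d))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_detector(X, y, steps=300, lr=0.1):
    """Fit logistic-regression weights by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def disrupt(X, w, steps=30, lr=0.1, eps=1.2):
    """Disruptor: projected-gradient-style attack that nudges each
    face toward 'not a face' under the detector, keeping every
    per-feature change within +/- eps."""
    delta = np.zeros_like(X)
    for _ in range(steps):
        p = sigmoid((X + delta) @ w)
        grad = (p - 1.0)[:, None] * w[None, :]  # d(loss)/d(input), label=1
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return delta

def detection_rate(X, w):
    return float(np.mean(sigmoid(X @ w) > 0.5))

# Round 1: train the detector, then let the disruptor attack it.
X = np.vstack([faces, others])
y = np.concatenate([np.ones(n), np.zeros(n)])
w1 = train_detector(X, y)
adv_faces = faces + disrupt(faces, w1)

clean_rate = detection_rate(faces, w1)         # high: detector works
attacked_rate = detection_rate(adv_faces, w1)  # drops: disruptor wins

# Round 2: the detector "learns from" the attack by retraining on a
# mix of clean and disrupted faces -- the next turn of the arms race.
X2 = np.vstack([faces, adv_faces, others])
y2 = np.concatenate([np.ones(2 * n), np.zeros(n)])
w2 = train_detector(X2, y2)
retrained_rate = detection_rate(adv_faces, w2)  # recovers ground

print(clean_rate, attacked_rate, retrained_rate)
```

Alternating these two updates, with each side training against the other's current best effort, is the essence of the adversarial setup; the Toronto work applies the same idea with deep networks over images rather than a linear model over toy vectors.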
The FBI maintains a huge database of more than 411m photos culled from sources including driver's licenses, passport applications and visa applications, which it cross-references with photos of criminal suspects using largely untested and questionably accurate facial recognition software. A study from the Government Accountability Office (GAO) released on Wednesday revealed the extent of the program for the first time; it had been probed several years earlier through a Freedom of Information Act request from the Electronic Frontier Foundation (EFF). The GAO, a watchdog office internal to the US federal government, found that the FBI did not appropriately disclose the database's impact on public privacy until the office audited the bureau in May. The office recommended that the attorney general determine why the FBI did not obey the disclosure requirements, and that the bureau conduct accuracy tests to determine whether the software is correctly cross-referencing driver's license and passport photos with images of criminal suspects. The Department of Justice "disagreed" with three of the GAO's six recommendations, according to the office, which stood by them.
Microsoft claims its facial recognition technology just got a little less awful. Earlier this year, a study by MIT researchers found that tools from IBM, Microsoft, and Chinese company Megvii could correctly identify light-skinned men with 99-percent accuracy, but misidentified darker-skinned women as often as one-third of the time. Now imagine a computer incorrectly flagging an image at an airport or in a police database, and you can see how dangerous those errors could be. Microsoft's software was among those that performed poorly in the study.
Data brokers already buy and sell detailed profiles that describe who you are. They track your public records and your online behavior to figure out your age, your gender, your relationship status, your exact location, how much money you make, which supermarket you shop at, and on and on and on. It's entirely reasonable to wonder how companies are collecting and using images of you, too. Facebook already uses facial recognition software to tag individual people in photos. Apple's new app, Clips, recognizes individuals in the videos you take.
Our brains are wired so that we can differentiate between objects, living and non-living, simply by looking at them. In fact, recognizing objects and situations visually is the fastest way to gather, as well as to relate, information. This is a much taller order for computers, which must be fed vast amounts of data before they can perform such an operation on their own. Yet with each passing day, it is becoming more essential for machines to identify objects and recognize faces, so that humans can take the next big step toward a more scientifically advanced society. So, what progress have we really made in that respect?