A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training," which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the first's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
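The core idea, stripped to its essentials, is that the disruptor uses the detector's own gradient to decide how to perturb an image. The toy sketch below is hypothetical and not the Toronto team's actual networks: it stands in a tiny logistic model for the face detector and applies a gradient-sign perturbation that pushes the detection score down.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Detector": a fixed logistic model standing in for a face-detection net.
w = rng.normal(size=16)
b = 0.1

def detect(x):
    """Return the detector's confidence (0..1) that x contains a face."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def disrupt(x, eps=0.5):
    """Perturb x against the detector's gradient to lower its score."""
    p = detect(x)
    grad = p * (1.0 - p) * w        # d(score)/dx for the logistic detector
    return x - eps * np.sign(grad)  # step in the score-decreasing direction

x = rng.normal(size=16)
print(detect(x), detect(disrupt(x)))  # the disrupted score is lower
```

In the full adversarial-training setup described in the article, both sides would be trained in alternation, each updating against the other's latest behaviour; the sketch shows only the disruptor's single attack step.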
Amazon has some explaining to do. The online retail giant has been caught providing facial recognition technology to law enforcement in Oregon and Orlando, according to documents obtained by the American Civil Liberties Union through a Freedom of Information Act request. Emails obtained through the request show how Amazon has been advertising and selling its facial recognition product, Rekognition, for only a few dollars a month to law enforcement agencies -- in the hopes that they would encourage other agencies to sign up. The emails also show that Amazon has marketed consulting services to law enforcement as well.
Civil liberties advocates are calling on Amazon to cease providing facial recognition technology to law enforcement agencies. "We demand that Amazon stop powering a government surveillance infrastructure that poses a grave threat to customers and communities across the country," a coalition led by the American Civil Liberties Union (ACLU) wrote in a letter to Amazon CEO Jeff Bezos. At issue is a tool known as "Rekognition" that allows users to compare anonymous faces against other images to try to establish identity. An explanatory post on Amazon's website notes that it offers "security and surveillance applications" that include "crime prevention" by identifying "persons of interest". According to emails obtained by the ACLU, multiple law enforcement agencies have harnessed the tool in their investigative work.
SAN FRANCISCO -- Amazon's controversial facial recognition program, Rekognition, falsely identified 28 members of Congress during a test of the program by the American Civil Liberties Union, the civil rights group said Thursday. In its test, the ACLU scanned photos of all members of Congress and had the system compare them with a public database of 25,000 mugshots. The group used the default "confidence threshold" setting of 80 percent for Rekognition, meaning the test counted a face match at 80 percent certainty or more. At that setting, the system misidentified 28 members of Congress, a disproportionate number of whom were people of color, tagging them instead as entirely different people who had been arrested for a crime. The faces of members of Congress used in the test included Republicans and Democrats, men and women, and legislators of all ages.
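The confidence threshold is the key variable in that result: it is simply the cutoff above which a similarity score counts as a "match", so lowering it produces more matches and therefore more potential false positives. A minimal sketch of that logic, using made-up similarity scores rather than real Rekognition output:

```python
# Hypothetical similarity scores (percent) from a face-matching system --
# invented numbers, only to illustrate how the threshold works.
scores = [95.2, 86.4, 81.0, 79.9, 62.3]

def matches(scores, threshold):
    """Keep only the scores that meet or exceed the confidence threshold."""
    return [s for s in scores if s >= threshold]

print(len(matches(scores, 80)))  # three faces count as matches at 80%
print(len(matches(scores, 99)))  # none survive a stricter 99% cutoff
```

With these toy numbers, the 80 percent setting used in the ACLU test admits three matches, while a stricter cutoff admits none, which is why the choice of threshold matters so much in a law-enforcement context.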
Facial recognition technology used by the UK police is making thousands of mistakes - and now there could be legal repercussions. Civil liberties group Big Brother Watch has teamed up with Baroness Jenny Jones to ask the government and the Met to stop using the technology. They claim the use of facial recognition has proven to be 'dangerously authoritarian', inaccurate and a breach of rights protecting privacy and freedom of expression. If their request is rejected, the group says it will take the case to court in what will be the first legal challenge of its kind. South Wales Police, London's Met and Leicestershire are all trialling automated facial recognition systems in public places to identify wanted criminals.