AI claims to be able to thwart facial recognition software, making you "invisible"

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training," which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the facial recognition performed by the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
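
For illustration, here is a minimal sketch of that adversarial setup in PyTorch. This is not the Toronto team's actual architecture: a stand-in face detector is held fixed while a "disruptor" network learns a small, bounded perturbation that lowers the detector's face-confidence score. In the full arms-race setup both networks would be trained in alternation, and all layer sizes and hyperparameters below are illustrative.

```python
# Sketch of adversarial training against a face detector (assumed architecture).
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Stand-in face detector: outputs a face-probability per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))

class Disruptor(nn.Module):
    """Learns a bounded perturbation to add to the input image."""
    def __init__(self, eps=0.05):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        # Tanh output in [-1, 1] scaled by eps keeps the change imperceptible.
        return (x + self.eps * self.net(x)).clamp(0.0, 1.0)

detector, disruptor = Detector(), Disruptor()
for p in detector.parameters():          # detector is frozen in this sketch
    p.requires_grad_(False)
opt = torch.optim.Adam(disruptor.parameters(), lr=1e-3)

faces = torch.rand(8, 3, 64, 64)         # placeholder batch of "face" images
for step in range(100):
    perturbed = disruptor(faces)
    conf = detector(perturbed)            # detector's face confidence
    loss = conf.mean()                    # disruptor wants this driven down
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Bounding the perturbation (the `eps` scale) is what makes the attack interesting: the image still looks like a face to a human while the detector's confidence collapses.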


The facial recognition software that could identify thousands of faces in Civil War photographs

Daily Mail - Science & tech

Facial recognition is being used to identify American Civil War soldiers who might otherwise have been lost to the sands of time. Computer scientist and history buff Kurt Luther created a free-to-use website, called Civil War Photo Sleuth, that uses facial recognition technology to cross-reference vintage photographs with a database and, hopefully, assign a name to unknown subjects. Luther was inspired to launch the website after he stumbled upon a wartime portrait of his great-great-uncle, a Union corporal in the Civil War. Once a user uploads a photograph, the site's facial recognition technology goes to work, mapping as many as 27 'facial landmarks.' It uses those landmarks to compare the photo against the more than 10,000 identified photos in the site's archive.
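
As a rough illustration of landmark-based matching (Photo Sleuth's internal pipeline is not public, so the representation below is an assumption): treat each photo as 27 (x, y) landmark coordinates and rank the archive by distance to the query. Production systems typically compare learned face embeddings rather than raw landmark positions.

```python
# Illustrative nearest-neighbor search over 27-point facial landmarks.
import numpy as np

def best_matches(query: np.ndarray, archive: np.ndarray, k: int = 5):
    """Return indices of the k archive faces closest to the query.

    query:   (27, 2) landmark coordinates for the unknown photo
    archive: (N, 27, 2) landmarks for N identified photos
    """
    # Flatten landmarks and use Euclidean distance as a crude similarity score.
    diffs = archive.reshape(len(archive), -1) - query.reshape(-1)
    dists = np.linalg.norm(diffs, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
archive = rng.random((10_000, 27, 2))  # placeholder for ~10,000 identified photos
query = rng.random((27, 2))            # landmarks from the unknown portrait
print(best_matches(query, archive))    # candidate identities for a human to review
```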


Facial recognition software is biased towards white men, researcher finds

#artificialintelligence

New research out of MIT's Media Lab underscores what other experts have reported or at least suspected before: facial recognition technology is subject to biases based on the data sets it is trained on and the conditions in which its algorithms are created. Joy Buolamwini, a researcher at the MIT Media Lab, recently built a dataset of 1,270 faces, using the faces of politicians selected based on their countries' rankings for gender parity (in other words, having a significant number of women in public office). Buolamwini then tested the accuracy of three facial recognition systems: those made by Microsoft, IBM, and Megvii of China. The results, originally reported in The New York Times, showed that the accuracy of gender identification depended on a person's skin color. Gender was misidentified in less than one percent of lighter-skinned males, in up to seven percent of lighter-skinned females, in up to 12 percent of darker-skinned males, and in up to 35 percent of darker-skinned females.
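
The audit boils down to computing a per-subgroup error rate. A minimal sketch, with made-up rows and column names standing in for the actual benchmark data:

```python
# Per-subgroup gender-misclassification rate (illustrative data only).
import pandas as pd

df = pd.DataFrame({
    "skin":      ["lighter", "lighter", "darker", "darker"] * 2,
    "gender":    ["male", "female"] * 4,
    "predicted": ["male", "female", "male", "male",
                  "male", "male", "female", "female"],
})

df["error"] = df["predicted"] != df["gender"]
rates = df.groupby(["skin", "gender"])["error"].mean()
print(rates)  # the metric behind the <1%-to-35% spread reported above
```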


Amazon is under fire for selling facial recognition tools to cops

Mashable

Amazon has some explaining to do. The online retail giant has been caught providing facial recognition technology to law enforcement in Oregon and Orlando, according to documents obtained by the American Civil Liberties Union through a Freedom of Information Act request. Emails obtained through the request show how Amazon has been advertising and selling its facial recognition product, Rekognition, to law enforcement agencies for only a few dollars a month, in the hope that early customers would encourage other agencies to sign up. The emails also show that Amazon has marketed consulting services to law enforcement.
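
For context on the product itself, this is roughly what a Rekognition face comparison looks like through AWS's boto3 SDK. The file names, region, and similarity threshold are placeholders, and running it requires AWS credentials and billing.

```python
# Comparing a source face against faces in a target image with Rekognition.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("suspect.jpg", "rb") as src, open("crowd.jpg", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=90,  # only return matches scored >= 90% similar
    )

for match in response["FaceMatches"]:
    print(f"Match with {match['Similarity']:.1f}% similarity")
```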


FBI uses questionable facial recognition software to comb vast photo database

The Guardian

The FBI maintains a huge database of more than 411m photos culled from sources including driver's licenses, passport applications and visa applications, which it cross-references with photos of criminal suspects using largely untested and questionably accurate facial recognition software. A study released on Wednesday by the Government Accountability Office (GAO) revealed the full extent of the program for the first time; its existence had surfaced several years earlier through a Freedom of Information Act request filed by the Electronic Frontier Foundation (EFF). The GAO, a watchdog office internal to the US federal government, found that the FBI did not appropriately disclose the database's impact on public privacy until the office audited the bureau in May. The GAO recommended that the attorney general determine why the FBI did not obey the disclosure requirements, and that the bureau conduct accuracy tests to determine whether the software is correctly cross-referencing driver's license and passport photos with images of criminal suspects. The Department of Justice "disagreed" with three of the GAO's six recommendations, according to the office, which stood by their validity.