Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use


The test studied how algorithms perform at both "one-to-one" matching, used for unlocking a phone or verifying a passport, and "one-to-many" matching, used by police to scan for a suspect's face across a vast set of driver's license photos. Investigators tested both false negatives, in which the system fails to match two photos of the same person, and false positives, in which the system matches photos of two different people as if they were the same -- a dangerous failure for police, who could end up arresting an innocent person.
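To make the two error types concrete, here is a minimal, hypothetical sketch (not from the study) of how a one-to-one verification system typically works: face images are reduced to embedding vectors, and the system declares a match when their similarity clears a threshold. The embeddings, threshold value, and function names below are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(embedding_a, embedding_b, threshold=0.8):
    """One-to-one matching: True means the system says 'same person'."""
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Toy embeddings for illustration only.
photo_1 = [0.9, 0.1, 0.2]    # first photo of person A
photo_2 = [0.88, 0.12, 0.21] # second photo of the same person
stranger = [0.1, 0.9, 0.3]   # photo of a different person

# A false negative would be verify(photo_1, photo_2) returning False:
# the system fails to match two photos of the same person.
# A false positive would be verify(photo_1, stranger) returning True:
# the system matches two different people -- the failure that could
# lead police to arrest an innocent person.
print(verify(photo_1, photo_2))   # True: correct match
print(verify(photo_1, stranger))  # False: correctly rejected
```

The study's finding of racial bias means, in these terms, that the error rates of `verify`-style systems varied by demographic group; one-to-many search works the same way, but compares one probe embedding against millions of stored embeddings, so even a small false-positive rate can produce many wrong candidates.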
