A new report details what privacy experts are calling a dangerous misapplication of facial recognition: using photos of celebrities and digitally doctored images to comb for criminals. According to a detailed investigation by Georgetown Law's Center on Privacy and Technology, one New York Police Department detective attempted to identify a suspect by scanning the face of actor Woody Harrelson. After footage from a security camera failed to produce results in a facial recognition scan, the detective ran a test using Google images of the man he concluded was the suspect's celebrity doppelganger -- Woody Harrelson. The system turned up a match, the report says, and the suspect was eventually arrested on charges of petit larceny.
Walking around without being constantly identified by AI could soon be a thing of the past, legal experts have warned. The use of facial recognition software could signal the end of civil liberties if the law doesn't change as quickly as the technology advances, they say. Software already being trialled around the world could soon be adopted by companies and governments to track you constantly wherever you go. Shop owners are already using facial recognition to track shoplifters, and could soon be sharing that information across a broad, potentially global, network of databases. Previous research has found that the technology isn't always accurate, misidentifying women and people with darker skin tones at higher rates.
A unit of the U.S. Department of Commerce has been using photos of immigrants, abused children and dead people to train its facial recognition systems, a worrying new report has detailed. The National Institute of Standards and Technology (NIST) oversees a database, part of the Facial Recognition Verification Testing program, that 'depends' on these types of controversial images, according to Slate. Scientists from Dartmouth College, the University of Washington and Arizona State University discovered the practice and laid out their findings in new research set to be reviewed for publication later this year. The Facial Recognition Verification Testing program was first established in 2017 as a way for companies, academic researchers and designers to evaluate their facial recognition technologies.
With images aggregated from social media platforms, dating sites, or even CCTV footage of a trip to the local coffee shop, companies could be using your face to train sophisticated facial recognition software. As reported by the New York Times, among the sometimes massive data sets researchers use to teach artificially intelligent software to recognize faces is a database collected by Stanford researchers called Brainwash: more than 10,000 images of customers at a San Francisco cafe, collected in 2014 without their knowledge. That same database was then made available to other academics, including some at China's National University of Defense Technology. OKCupid and photo-sharing platforms like Flickr are also among the sources for researchers looking to load their databases with images that help train facial recognition software.