In the past few years, there's been a dramatic rise in the adoption of face recognition, detection, and analysis technology. You're probably most familiar with recognition systems, like Facebook's photo-tagging recommender and Apple's FaceID, which can identify specific individuals. Detection systems, on the other hand, determine only whether a face is present at all, while analysis systems try to infer attributes such as gender and race. All of these systems are now being used for a variety of purposes, from hiring and retail to security and surveillance. Many people believe that such systems are both highly accurate and impartial.
Cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the U.S. Capitol in Washington as he testified before a Senate panel last week. A federal judge in California has ruled that Facebook can be sued in a class-action lawsuit brought by users in Illinois who say the social network improperly used facial recognition technology on their uploaded photographs. The plaintiffs are three Illinois Facebook users who sued under a state law that says a private entity such as Facebook can't collect and store a person's biometric facial information without their written consent. The law, known as the Biometric Information Privacy Act, also says that information that uniquely identifies an individual is, in essence, their property.
A top Google executive recently sent a shot across the bow of its competitors regarding face surveillance. Kent Walker, the company's general counsel and senior vice president of global affairs, made it clear that Google -- unlike Amazon and Microsoft -- will not sell a face recognition product until the technology's potential for abuse is addressed. Face recognition, powered by artificial intelligence, could allow the government to supercharge surveillance by automating identification and tracking. Authorities could use it to track protesters, target vulnerable communities (such as immigrants), and expand digital policing in communities of color that are already subject to pervasive police monitoring. So how are the world's biggest technology companies responding to this serious threat to privacy, safety and civil rights?
Amazon investors are turning up the heat on CEO Jeff Bezos with a new letter demanding he stop selling the company's controversial facial recognition technology to police. The shareholder proposal calls for Amazon to stop offering the product, called Rekognition, to government agencies until it undergoes a civil and human rights review. It follows similar criticisms voiced by 450 Amazon employees, as well as civil liberties groups and members of Congress, over the past several months. 'Rekognition contradicts Amazon's opposition to facilitating surveillance,' the letter states. '...Shareholders have little evidence our company is effectively restricting the use of Rekognition to protect privacy and civil rights.'
SAN FRANCISCO -- Amazon's controversial facial recognition program, Rekognition, falsely identified 28 members of Congress during a test of the program by the American Civil Liberties Union, the civil rights group said Thursday. In its test, the ACLU scanned photos of all members of Congress and had the system compare them with a public database of 25,000 mugshots. The group used the default "confidence threshold" setting of 80 percent for Rekognition, meaning the test counted a face match at 80 percent certainty or more. At that setting, the system misidentified 28 members of Congress, a disproportionate number of whom were people of color, tagging them instead as entirely different people who have been arrested for a crime. The faces of members of Congress used in the test included Republicans and Democrats, men and women, and legislators of all ages.