Amazon's controversial facial recognition technology has incorrectly matched more than 100 photos of politicians in the UK and US to police mugshots, new tests have revealed. Amazon Rekognition uses artificial intelligence software to identify individuals from their facial structure. Customers include law enforcement and US government agencies like Immigration and Customs Enforcement (ICE). It is not the first time the software's accuracy has been called into question. In July 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.
Facebook will face a class action lawsuit in the wake of its privacy scandal, a US federal judge has ruled. Allegations of privacy violations emerged when it was revealed the app used a photo-scanning tool on users' images without their explicit consent. The facial recognition tool, launched in 2010, suggests names for people it identifies in photos uploaded by users. Under Illinois state law, the company could be fined $1,000 to $5,000 (£700 to £3,500) each time a person's image was used without consent. The technology was suspended for users in Europe in 2012 over privacy fears but is still live in the US and other regions worldwide.
Amazon investors are turning up the heat on CEO Jeff Bezos with a new letter demanding he stop selling the company's controversial facial recognition technology to police. The shareholder proposal calls for Amazon to stop offering the product, called Rekognition, to government agencies until it undergoes a civil and human rights review. It follows similar criticisms voiced by 450 Amazon employees, as well as civil liberties groups and members of Congress, over the past several months. 'Rekognition contradicts Amazon's opposition to facilitating surveillance,' the letter states. '...Shareholders have little evidence our company is effectively restricting the use of Rekognition to protect privacy and civil rights.'
We may remember 2018 as the year when technology's dystopian potential became clear, from Facebook's role enabling the harvesting of our personal data for election interference to a seemingly unending series of revelations about the dark side of Silicon Valley's connect-everything ethos. The list is long: High-tech tools for immigration crackdowns. YouTube algorithms that steer youths into extremism. Doorbells and concert venues that can pinpoint individual faces and alert police. Repurposing genealogy websites to hunt for crime suspects based on a relative's DNA.
Facebook users who felt that their privacy was violated by the website's use of facial recognition software -- which it uses to help identify and tag people in photographs -- won an early legal victory Thursday when a San Francisco federal judge rejected a request by the internet company to dismiss a lawsuit challenging its collection of biometric information. "The court accepts as true plaintiffs' allegations that Facebook's face recognition technology involves a scan of face geometry that was done without plaintiffs' consent," U.S. District Judge James Donato ruled. Three Illinois residents filed separate lawsuits, later combined, under the state's Biometric Information Privacy Act of 2008, which allows companies to be sued for failing to obtain consumers' consent before collecting or storing their biometric information, including the "faceprints" used by Facebook (and also Google) to identify people in photographs. Facebook introduced its face-recognition feature in 2010. California, where Facebook is based, has no law regulating the use of biometrics.