Microsoft's facial-recognition technology is getting better at recognizing people with darker skin tones. On Tuesday, the company touted the progress, though it comes amid growing worries that these technologies will enable surveillance of people of color. Microsoft's announcement didn't address those concerns; the company spoke only to how its facial-recognition tech could misidentify both men and women with darker skin tones, error rates it says it has recently reduced by up to 20 times. In February, research from MIT and Stanford University highlighted how facial-recognition technologies can be built with bias.
On May 14, 2019, San Francisco became the first major city in the United States to ban the use of facial-recognition technology by its government and law enforcement agencies. The ban is part of a broader anti-surveillance ordinance, which as of May 14 was set to go into effect in about a month. Local officials and civil advocates fear the repercussions of allowing facial-recognition technology to proliferate throughout San Francisco, while supporters of the software claim that the ban could limit technological progress. In this article, I'll examine the ban that just took place in San Francisco, explore the concerns surrounding facial-recognition technology, and explain why an outright ban may not be the best course of action.
Boston City Councilors voted unanimously to ban the use of facial-recognition technology by police -- technology the Boston Police Department doesn't currently use anyway because of its unreliability. All 13 councilors voted in favor of the order, authored by Councilors Ricardo Arroyo and Michelle Wu, to bar the city from using technology that matches people's faces. Mayor Marty Walsh's office said the mayor would review the legislation, without committing to whether he would sign it. "It puts Bostonians at risk for misidentification," Arroyo said. A recent MIT study found that the technology was wrong more often when trying to identify darker-skinned people.
Let's say it together: Facial-recognition technology is a dangerous, biased mess. We were reminded of this obvious fact again with the news Friday that an innocent man, despite not looking like the perpetrator at all, was arrested last year after being falsely identified by faulty facial-recognition tech. It is the second known case of facial-recognition software directly leading to the arrest of an innocent man, and privacy advocates fear such cases will become a growing trend unless drastic action is taken to stop the technology in its tracks. Michael Oliver, then 25, was charged with a felony for supposedly grabbing a phone from a car passenger and throwing it, reports the Detroit Free Press.
King's Cross Central's developers said they wanted facial-recognition software to spot people on the site who had previously committed an offence there. The detail emerged in a letter one of the site's managers sent to the London mayor, Sadiq Khan, on 14 August, after Khan had sought reassurance that using facial recognition on the site was legal. Two days earlier, the developer, Argent, had indicated it was using the technology to "ensure public safety". On Monday, it said it had now scrapped work on new uses of the technology.